Nov 29 04:34:49 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 04:34:49 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 04:34:49 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 04:34:49 localhost kernel: BIOS-provided physical RAM map:
Nov 29 04:34:49 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 04:34:49 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 04:34:49 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 04:34:49 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 04:34:49 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 04:34:49 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 04:34:49 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 04:34:49 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
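
The three usable ranges in this map account for roughly 8 GiB, consistent with the Memory: line printed later in the boot. A minimal sketch that sums the usable spans quoted above (the ranges are hard-coded from this log; reading them live would mean parsing dmesg output instead):

    # Sum the "usable" spans from the BIOS-e820 map above (end addresses are inclusive).
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x00000000bffdafff),
        (0x0000000100000000, 0x000000023fffffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"{total / 2**30:.2f} GiB usable")  # -> 8.00 GiB, consistent with the later Memory: line
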
Nov 29 04:34:49 localhost kernel: NX (Execute Disable) protection: active
Nov 29 04:34:49 localhost kernel: APIC: Static calls initialized
Nov 29 04:34:49 localhost kernel: SMBIOS 2.8 present.
Nov 29 04:34:49 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 04:34:49 localhost kernel: Hypervisor detected: KVM
Nov 29 04:34:49 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 04:34:49 localhost kernel: kvm-clock: using sched offset of 3164990558 cycles
Nov 29 04:34:49 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 04:34:49 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 29 04:34:49 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 29 04:34:49 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 29 04:34:49 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 04:34:49 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 04:34:49 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 04:34:49 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 04:34:49 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 04:34:49 localhost kernel: Using GB pages for direct mapping
Nov 29 04:34:49 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 04:34:49 localhost kernel: ACPI: Early table checksum verification disabled
Nov 29 04:34:49 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 04:34:49 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 04:34:49 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 04:34:49 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 04:34:49 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 04:34:49 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 04:34:49 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 04:34:49 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 04:34:49 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 04:34:49 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 04:34:49 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 04:34:49 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 04:34:49 localhost kernel: No NUMA configuration found
Nov 29 04:34:49 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 04:34:49 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 29 04:34:49 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
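
The crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M parameter on the command line selects the reservation size by total-RAM band; with roughly 8 GiB this guest lands in the 2G-64G band, so 256M is reserved, and the range above checks out (0xbf000000 - 0xaf000000 = 256 MiB). A sketch of that band-selection logic, illustrating the documented range syntax rather than the kernel's actual parser:

    def crashkernel_size(total_bytes, spec="1G-2G:192M,2G-64G:256M,64G-:512M"):
        """Pick the reservation for the first matching start-end:size band."""
        unit = {"M": 2**20, "G": 2**30}
        def parse(s):
            return int(s[:-1]) * unit[s[-1]] if s else None   # "" means open-ended
        for band in spec.split(","):
            rng, size = band.split(":")
            start, _, end = rng.partition("-")
            lo, hi = parse(start), parse(end)
            if total_bytes >= lo and (hi is None or total_bytes < hi):
                return parse(size)
        return 0

    print(crashkernel_size(8 * 2**30) // 2**20, "MiB")  # -> 256 MiB, as reserved above
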
Nov 29 04:34:49 localhost kernel: Zone ranges:
Nov 29 04:34:49 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 04:34:49 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 04:34:49 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 04:34:49 localhost kernel:   Device   empty
Nov 29 04:34:49 localhost kernel: Movable zone start for each node
Nov 29 04:34:49 localhost kernel: Early memory node ranges
Nov 29 04:34:49 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 04:34:49 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 04:34:49 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 04:34:49 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 04:34:49 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 04:34:49 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 04:34:49 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 04:34:49 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 04:34:49 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 04:34:49 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 04:34:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 04:34:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 04:34:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 04:34:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 04:34:49 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 04:34:49 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 04:34:49 localhost kernel: TSC deadline timer available
Nov 29 04:34:49 localhost kernel: CPU topo: Max. logical packages:   8
Nov 29 04:34:49 localhost kernel: CPU topo: Max. logical dies:       8
Nov 29 04:34:49 localhost kernel: CPU topo: Max. dies per package:   1
Nov 29 04:34:49 localhost kernel: CPU topo: Max. threads per core:   1
Nov 29 04:34:49 localhost kernel: CPU topo: Num. cores per package:     1
Nov 29 04:34:49 localhost kernel: CPU topo: Num. threads per package:   1
Nov 29 04:34:49 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 04:34:49 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 04:34:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 04:34:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 04:34:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 04:34:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 04:34:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 04:34:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 04:34:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 04:34:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 04:34:49 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 04:34:49 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 04:34:49 localhost kernel: Booting paravirtualized kernel on KVM
Nov 29 04:34:49 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 04:34:49 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 04:34:49 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 04:34:49 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 29 04:34:49 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 29 04:34:49 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 04:34:49 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 04:34:49 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
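
BOOT_IMAGE is not a kernel parameter; the bootloader adds it, so the kernel forwards it to user space, and it reappears in init's environment in the "with environment:" block near the end of kernel init below. The full command line remains readable at runtime; a minimal sketch of pulling it apart (illustrative; real parsing would need to handle quoted values):

    # Read the booted command line and split it into key=value parameters.
    with open("/proc/cmdline") as f:
        params = f.read().split()
    print(dict(p.split("=", 1) for p in params if "=" in p).get("BOOT_IMAGE"))
    # -> (hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
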
Nov 29 04:34:49 localhost kernel: random: crng init done
Nov 29 04:34:49 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 04:34:49 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
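
These hash-table lines are internally consistent: each bucket is an 8-byte pointer on x86_64, and the order is the power-of-two count of 4 KiB pages. A quick check against the dentry-cache line above:

    entries, order, size = 1048576, 11, 8388608     # from the Dentry cache line
    assert entries * 8 == size                      # 8-byte pointer per bucket
    assert (2 ** order) * 4096 == size              # order-11 = 2048 pages of 4 KiB
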
Nov 29 04:34:49 localhost kernel: Fallback order for Node 0: 0 
Nov 29 04:34:49 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 04:34:49 localhost kernel: Policy zone: Normal
Nov 29 04:34:49 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 04:34:49 localhost kernel: software IO TLB: area num 8.
Nov 29 04:34:49 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 04:34:49 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 04:34:49 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 04:34:49 localhost kernel: Dynamic Preempt: voluntary
Nov 29 04:34:49 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 04:34:49 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 29 04:34:49 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 04:34:49 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 29 04:34:49 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 29 04:34:49 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 29 04:34:49 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 04:34:49 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 04:34:49 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 04:34:49 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 04:34:49 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 04:34:49 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 04:34:49 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 04:34:49 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 04:34:49 localhost kernel: Console: colour VGA+ 80x25
Nov 29 04:34:49 localhost kernel: printk: console [ttyS0] enabled
Nov 29 04:34:49 localhost kernel: ACPI: Core revision 20230331
Nov 29 04:34:49 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 04:34:49 localhost kernel: x2apic enabled
Nov 29 04:34:49 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 04:34:49 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 04:34:49 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 29 04:34:49 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 04:34:49 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 04:34:49 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 04:34:49 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 04:34:49 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 04:34:49 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 04:34:49 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 04:34:49 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 04:34:49 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 04:34:49 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 04:34:49 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 04:34:49 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 04:34:49 localhost kernel: x86/bugs: return thunk changed
Nov 29 04:34:49 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 04:34:49 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 04:34:49 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 04:34:49 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 04:34:49 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 04:34:49 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
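
The xstate layout adds up: in the compacted format the AVX area begins right after the 512-byte legacy FXSAVE region plus the 64-byte XSAVE header (offset 576), and adding the 256-byte AVX state yields the reported 832-byte context. As arithmetic:

    legacy_fxsave, xsave_header, avx_state = 512, 64, 256
    assert legacy_fxsave + xsave_header == 576      # xstate_offset[2] above
    assert 576 + avx_state == 832                   # "context size is 832 bytes"
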
Nov 29 04:34:49 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 29 04:34:49 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 29 04:34:49 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 04:34:49 localhost kernel: landlock: Up and running.
Nov 29 04:34:49 localhost kernel: Yama: becoming mindful.
Nov 29 04:34:49 localhost kernel: SELinux:  Initializing.
Nov 29 04:34:49 localhost kernel: LSM support for eBPF active
Nov 29 04:34:49 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 04:34:49 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 04:34:49 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 04:34:49 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 04:34:49 localhost kernel: ... version:                0
Nov 29 04:34:49 localhost kernel: ... bit width:              48
Nov 29 04:34:49 localhost kernel: ... generic registers:      6
Nov 29 04:34:49 localhost kernel: ... value mask:             0000ffffffffffff
Nov 29 04:34:49 localhost kernel: ... max period:             00007fffffffffff
Nov 29 04:34:49 localhost kernel: ... fixed-purpose events:   0
Nov 29 04:34:49 localhost kernel: ... event mask:             000000000000003f
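
This counter summary is self-consistent: the value mask is a 48-bit all-ones pattern matching the reported bit width, the max period is half that counter range, and the event mask has exactly six bits set, one per generic register. Checked:

    assert 0x0000ffffffffffff == (1 << 48) - 1      # value mask <-> 48-bit counters
    assert 0x00007fffffffffff == (1 << 47) - 1      # max period is half the counter range
    assert bin(0x3f).count("1") == 6                # event mask <-> 6 generic registers
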
Nov 29 04:34:49 localhost kernel: signal: max sigframe size: 1776
Nov 29 04:34:49 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 29 04:34:49 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 29 04:34:49 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 29 04:34:49 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 29 04:34:49 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 04:34:49 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 04:34:49 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
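
Calibration was skipped, so BogoMIPS is derived from the preset loops-per-jiffy value rather than measured: with lpj = 2799998 (the 2799.998 MHz TSC divided by HZ), the per-CPU figure is lpj / (500000 / HZ), and eight CPUs sum to the total above. A sketch assuming HZ=1000, the usual RHEL 9 x86_64 configuration:

    hz = 1000                         # CONFIG_HZ (an assumption for this kernel build)
    lpj = 2799998                     # preset from the 2799.998 MHz TSC: lpj = freq_hz / HZ
    bogomips = lpj / (500000 / hz)    # the kernel's reporting formula
    print(bogomips, 8 * bogomips)     # -> 5599.996 and 44799.968, matching the lines above
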
Nov 29 04:34:49 localhost kernel: node 0 deferred pages initialised in 10ms
Nov 29 04:34:49 localhost kernel: Memory: 7765680K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616272K reserved, 0K cma-reserved)
Nov 29 04:34:49 localhost kernel: devtmpfs: initialized
Nov 29 04:34:49 localhost kernel: x86/mm: Memory block size: 128MB
Nov 29 04:34:49 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 04:34:49 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 04:34:49 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 04:34:49 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 04:34:49 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 04:34:49 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 04:34:49 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 04:34:49 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 29 04:34:49 localhost kernel: audit: type=2000 audit(1764390887.095:1): state=initialized audit_enabled=0 res=1
Nov 29 04:34:49 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 04:34:49 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 04:34:49 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 04:34:49 localhost kernel: cpuidle: using governor menu
Nov 29 04:34:49 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 04:34:49 localhost kernel: PCI: Using configuration type 1 for base access
Nov 29 04:34:49 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 29 04:34:49 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 04:34:49 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 04:34:49 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 04:34:49 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 04:34:49 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
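
The freeable-vmemmap figures follow from 64-byte struct page entries: a 2 MiB huge page covers 512 base pages (32 KiB of vmemmap) and a 1 GiB page covers 262144 (16 MiB), and in each case everything except one 4 KiB page can be freed. As a check:

    struct_page = 64                                # bytes per struct page on x86_64
    for huge, freed_kib in ((2 * 2**20, 28), (1 * 2**30, 16380)):
        vmemmap = (huge // 4096) * struct_page      # metadata for the base pages covered
        assert vmemmap // 1024 - 4 == freed_kib     # all but one 4 KiB page is freeable
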
Nov 29 04:34:49 localhost kernel: Demotion targets for Node 0: null
Nov 29 04:34:49 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 04:34:49 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 29 04:34:49 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 29 04:34:49 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 04:34:49 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 04:34:49 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 04:34:49 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 04:34:49 localhost kernel: ACPI: Interpreter enabled
Nov 29 04:34:49 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 04:34:49 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 04:34:49 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 04:34:49 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 04:34:49 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 04:34:49 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 04:34:49 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [3] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [4] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [5] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [6] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [7] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [8] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [9] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [10] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [11] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [12] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [13] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [14] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [15] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [16] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [17] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [18] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [19] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [20] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [21] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [22] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [23] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [24] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [25] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [26] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [27] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [28] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [29] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [30] registered
Nov 29 04:34:49 localhost kernel: acpiphp: Slot [31] registered
Nov 29 04:34:49 localhost kernel: PCI host bridge to bus 0000:00
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 04:34:49 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 04:34:49 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 04:34:49 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 04:34:49 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 04:34:49 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 04:34:49 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 04:34:49 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 04:34:49 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 04:34:49 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 04:34:49 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 04:34:49 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 04:34:49 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 04:34:49 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 04:34:49 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 04:34:49 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 04:34:49 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 04:34:49 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 04:34:49 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 04:34:49 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 04:34:49 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 04:34:49 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 04:34:49 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 04:34:49 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 04:34:49 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 04:34:49 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 04:34:49 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 04:34:49 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 04:34:49 localhost kernel: iommu: Default domain type: Translated
Nov 29 04:34:49 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 04:34:49 localhost kernel: SCSI subsystem initialized
Nov 29 04:34:49 localhost kernel: ACPI: bus type USB registered
Nov 29 04:34:49 localhost kernel: usbcore: registered new interface driver usbfs
Nov 29 04:34:49 localhost kernel: usbcore: registered new interface driver hub
Nov 29 04:34:49 localhost kernel: usbcore: registered new device driver usb
Nov 29 04:34:49 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 04:34:49 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 04:34:49 localhost kernel: PTP clock support registered
Nov 29 04:34:49 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 29 04:34:49 localhost kernel: NetLabel: Initializing
Nov 29 04:34:49 localhost kernel: NetLabel:  domain hash size = 128
Nov 29 04:34:49 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 04:34:49 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 04:34:49 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 29 04:34:49 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 29 04:34:49 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 29 04:34:49 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 29 04:34:49 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 04:34:49 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 04:34:49 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 04:34:49 localhost kernel: vgaarb: loaded
Nov 29 04:34:49 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 04:34:49 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 04:34:49 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 04:34:49 localhost kernel: pnp: PnP ACPI init
Nov 29 04:34:49 localhost kernel: pnp 00:03: [dma 2]
Nov 29 04:34:49 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 29 04:34:49 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 04:34:49 localhost kernel: NET: Registered PF_INET protocol family
Nov 29 04:34:49 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 04:34:49 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 04:34:49 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 04:34:49 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 04:34:49 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 04:34:49 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 04:34:49 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 04:34:49 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 04:34:49 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 04:34:49 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 04:34:49 localhost kernel: NET: Registered PF_XDP protocol family
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 04:34:49 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 04:34:49 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 04:34:49 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 04:34:49 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 98104 usecs
Nov 29 04:34:49 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 29 04:34:49 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 04:34:49 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 04:34:49 localhost kernel: ACPI: bus type thunderbolt registered
Nov 29 04:34:49 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 29 04:34:49 localhost kernel: Initialise system trusted keyrings
Nov 29 04:34:49 localhost kernel: Key type blacklist registered
Nov 29 04:34:49 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 04:34:49 localhost kernel: zbud: loaded
Nov 29 04:34:49 localhost kernel: integrity: Platform Keyring initialized
Nov 29 04:34:49 localhost kernel: integrity: Machine keyring initialized
Nov 29 04:34:49 localhost kernel: Freeing initrd memory: 85868K
Nov 29 04:34:49 localhost kernel: NET: Registered PF_ALG protocol family
Nov 29 04:34:49 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 29 04:34:49 localhost kernel: Key type asymmetric registered
Nov 29 04:34:49 localhost kernel: Asymmetric key parser 'x509' registered
Nov 29 04:34:49 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 04:34:49 localhost kernel: io scheduler mq-deadline registered
Nov 29 04:34:49 localhost kernel: io scheduler kyber registered
Nov 29 04:34:49 localhost kernel: io scheduler bfq registered
Nov 29 04:34:49 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 04:34:49 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 04:34:49 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 04:34:49 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 29 04:34:49 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 04:34:49 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 04:34:49 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 04:34:49 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 04:34:49 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 04:34:49 localhost kernel: Non-volatile memory driver v1.3
Nov 29 04:34:49 localhost kernel: rdac: device handler registered
Nov 29 04:34:49 localhost kernel: hp_sw: device handler registered
Nov 29 04:34:49 localhost kernel: emc: device handler registered
Nov 29 04:34:49 localhost kernel: alua: device handler registered
Nov 29 04:34:49 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 04:34:49 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 04:34:49 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 04:34:49 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 04:34:49 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 04:34:49 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 04:34:49 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 29 04:34:49 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 04:34:49 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 04:34:49 localhost kernel: hub 1-0:1.0: USB hub found
Nov 29 04:34:49 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 29 04:34:49 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 04:34:49 localhost kernel: usbserial: USB Serial support registered for generic
Nov 29 04:34:49 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 04:34:49 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 04:34:49 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 04:34:49 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 04:34:49 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 04:34:49 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 04:34:49 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 04:34:49 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T04:34:48 UTC (1764390888)
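
The epoch value in parentheses corresponds to the printed UTC time, which is also, give or take a second of boot skew, the timestamp on every line of this log. It can be verified with:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1764390888, tz=timezone.utc))
    # -> 2025-11-29 04:34:48+00:00, as logged above
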
Nov 29 04:34:49 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 04:34:49 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 04:34:49 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 04:34:49 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 04:34:49 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 04:34:49 localhost kernel: usbcore: registered new interface driver usbhid
Nov 29 04:34:49 localhost kernel: usbhid: USB HID core driver
Nov 29 04:34:49 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 29 04:34:49 localhost kernel: Initializing XFRM netlink socket
Nov 29 04:34:49 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 29 04:34:49 localhost kernel: Segment Routing with IPv6
Nov 29 04:34:49 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 29 04:34:49 localhost kernel: mpls_gso: MPLS GSO support
Nov 29 04:34:49 localhost kernel: IPI shorthand broadcast: enabled
Nov 29 04:34:49 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 04:34:49 localhost kernel: AES CTR mode by8 optimization enabled
Nov 29 04:34:49 localhost kernel: sched_clock: Marking stable (1189010823, 149767947)->(1457179421, -118400651)
Nov 29 04:34:49 localhost kernel: registered taskstats version 1
Nov 29 04:34:49 localhost kernel: Loading compiled-in X.509 certificates
Nov 29 04:34:49 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 04:34:49 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 04:34:49 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 04:34:49 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 04:34:49 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 04:34:49 localhost kernel: Demotion targets for Node 0: null
Nov 29 04:34:49 localhost kernel: page_owner is disabled
Nov 29 04:34:49 localhost kernel: Key type .fscrypt registered
Nov 29 04:34:49 localhost kernel: Key type fscrypt-provisioning registered
Nov 29 04:34:49 localhost kernel: Key type big_key registered
Nov 29 04:34:49 localhost kernel: Key type encrypted registered
Nov 29 04:34:49 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 04:34:49 localhost kernel: Loading compiled-in module X.509 certificates
Nov 29 04:34:49 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 04:34:49 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 29 04:34:49 localhost kernel: ima: No architecture policies found
Nov 29 04:34:49 localhost kernel: evm: Initialising EVM extended attributes:
Nov 29 04:34:49 localhost kernel: evm: security.selinux
Nov 29 04:34:49 localhost kernel: evm: security.SMACK64 (disabled)
Nov 29 04:34:49 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 04:34:49 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 04:34:49 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 04:34:49 localhost kernel: evm: security.apparmor (disabled)
Nov 29 04:34:49 localhost kernel: evm: security.ima
Nov 29 04:34:49 localhost kernel: evm: security.capability
Nov 29 04:34:49 localhost kernel: evm: HMAC attrs: 0x1
Nov 29 04:34:49 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 04:34:49 localhost kernel: Running certificate verification RSA selftest
Nov 29 04:34:49 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 04:34:49 localhost kernel: Running certificate verification ECDSA selftest
Nov 29 04:34:49 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 04:34:49 localhost kernel: clk: Disabling unused clocks
Nov 29 04:34:49 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 29 04:34:49 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 04:34:49 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 29 04:34:49 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 04:34:49 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 04:34:49 localhost kernel: Run /init as init process
Nov 29 04:34:49 localhost kernel:   with arguments:
Nov 29 04:34:49 localhost kernel:     /init
Nov 29 04:34:49 localhost kernel:   with environment:
Nov 29 04:34:49 localhost kernel:     HOME=/
Nov 29 04:34:49 localhost kernel:     TERM=linux
Nov 29 04:34:49 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 29 04:34:49 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 04:34:49 localhost systemd[1]: Detected virtualization kvm.
Nov 29 04:34:49 localhost systemd[1]: Detected architecture x86-64.
Nov 29 04:34:49 localhost systemd[1]: Running in initrd.
Nov 29 04:34:49 localhost systemd[1]: No hostname configured, using default hostname.
Nov 29 04:34:49 localhost systemd[1]: Hostname set to <localhost>.
Nov 29 04:34:49 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 29 04:34:49 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 04:34:49 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 04:34:49 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 04:34:49 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 29 04:34:49 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 04:34:49 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 04:34:49 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 04:34:49 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 29 04:34:49 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 04:34:49 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 29 04:34:49 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 29 04:34:49 localhost systemd[1]: Reached target Local File Systems.
Nov 29 04:34:49 localhost systemd[1]: Reached target Path Units.
Nov 29 04:34:49 localhost systemd[1]: Reached target Slice Units.
Nov 29 04:34:49 localhost systemd[1]: Reached target Swaps.
Nov 29 04:34:49 localhost systemd[1]: Reached target Timer Units.
Nov 29 04:34:49 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 04:34:49 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 29 04:34:49 localhost systemd[1]: Listening on Journal Socket.
Nov 29 04:34:49 localhost systemd[1]: Listening on udev Control Socket.
Nov 29 04:34:49 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 29 04:34:49 localhost systemd[1]: Reached target Socket Units.
Nov 29 04:34:49 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 29 04:34:49 localhost systemd[1]: Starting Journal Service...
Nov 29 04:34:49 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 04:34:49 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 29 04:34:49 localhost systemd[1]: Starting Create System Users...
Nov 29 04:34:49 localhost systemd[1]: Starting Setup Virtual Console...
Nov 29 04:34:49 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 04:34:49 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 29 04:34:49 localhost systemd[1]: Finished Create System Users.
Nov 29 04:34:49 localhost systemd-journald[306]: Journal started
Nov 29 04:34:49 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/60584de4e08041489fd937c7db79f006) is 8.0M, max 153.6M, 145.6M free.
Nov 29 04:34:49 localhost systemd-sysusers[311]: Creating group 'users' with GID 100.
Nov 29 04:34:49 localhost systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Nov 29 04:34:49 localhost systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 04:34:49 localhost systemd[1]: Started Journal Service.
Nov 29 04:34:49 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 04:34:49 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 04:34:49 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 04:34:49 localhost systemd[1]: Finished Setup Virtual Console.
Nov 29 04:34:49 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 04:34:49 localhost systemd[1]: Starting dracut cmdline hook...
Nov 29 04:34:49 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 04:34:49 localhost dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 04:34:49 localhost dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 04:34:49 localhost systemd[1]: Finished dracut cmdline hook.
Nov 29 04:34:49 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 29 04:34:49 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 04:34:49 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 29 04:34:49 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 04:34:49 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 29 04:34:49 localhost kernel: RPC: Registered udp transport module.
Nov 29 04:34:49 localhost kernel: RPC: Registered tcp transport module.
Nov 29 04:34:49 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 04:34:49 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 04:34:49 localhost rpc.statd[443]: Version 2.5.4 starting
Nov 29 04:34:49 localhost rpc.statd[443]: Initializing NSM state
Nov 29 04:34:49 localhost rpc.idmapd[448]: Setting log level to 0
Nov 29 04:34:49 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 29 04:34:49 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 04:34:49 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 04:34:49 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 04:34:49 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 29 04:34:49 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 29 04:34:49 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 29 04:34:49 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 29 04:34:49 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 04:34:49 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 29 04:34:49 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 04:34:49 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 04:34:49 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 04:34:49 localhost systemd[1]: Reached target Network.
Nov 29 04:34:49 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 04:34:49 localhost systemd[1]: Starting dracut initqueue hook...
Nov 29 04:34:49 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 04:34:49 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
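
The two capacities are the same quantity in decimal and binary units: 167772160 sectors of 512 bytes is 85899345920 bytes, i.e. about 85.9 GB (10^9 bytes) and exactly 80.0 GiB (2^30 bytes). As arithmetic:

    sectors = 167772160
    size = sectors * 512
    print(size / 1e9, size / 2**30)   # -> 85.89934592 GB and exactly 80.0 GiB
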
Nov 29 04:34:49 localhost kernel:  vda: vda1
Nov 29 04:34:49 localhost kernel: libata version 3.00 loaded.
Nov 29 04:34:49 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 29 04:34:49 localhost kernel: scsi host0: ata_piix
Nov 29 04:34:49 localhost kernel: scsi host1: ata_piix
Nov 29 04:34:49 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 04:34:49 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 04:34:49 localhost systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 04:34:49 localhost systemd[1]: Reached target Initrd Root Device.
Nov 29 04:34:50 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 29 04:34:50 localhost kernel: ata1: found unknown device (class 0)
Nov 29 04:34:50 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 04:34:50 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 04:34:50 localhost systemd-udevd[493]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:34:50 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 04:34:50 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 04:34:50 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 04:34:50 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 29 04:34:50 localhost systemd[1]: Reached target System Initialization.
Nov 29 04:34:50 localhost systemd[1]: Reached target Basic System.
Nov 29 04:34:50 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 29 04:34:50 localhost systemd[1]: Finished dracut initqueue hook.
Nov 29 04:34:50 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 04:34:50 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 04:34:50 localhost systemd[1]: Reached target Remote File Systems.
Nov 29 04:34:50 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 29 04:34:50 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 29 04:34:50 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 04:34:50 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 04:34:50 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 04:34:50 localhost systemd[1]: Mounting /sysroot...
Nov 29 04:34:50 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 04:34:50 localhost kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 04:34:50 localhost kernel: XFS (vda1): Ending clean mount
Nov 29 04:34:50 localhost systemd[1]: Mounted /sysroot.
Nov 29 04:34:50 localhost systemd[1]: Reached target Initrd Root File System.
Nov 29 04:34:50 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 04:34:50 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 04:34:50 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 04:34:50 localhost systemd[1]: Reached target Initrd File Systems.
Nov 29 04:34:50 localhost systemd[1]: Reached target Initrd Default Target.
Nov 29 04:34:50 localhost systemd[1]: Starting dracut mount hook...
Nov 29 04:34:50 localhost systemd[1]: Finished dracut mount hook.
Nov 29 04:34:50 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 04:34:51 localhost rpc.idmapd[448]: exiting on signal 15
Nov 29 04:34:51 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 04:34:51 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 04:34:51 localhost systemd[1]: Stopped target Network.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Timer Units.
Nov 29 04:34:51 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 04:34:51 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Basic System.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Path Units.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Remote File Systems.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Slice Units.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Socket Units.
Nov 29 04:34:51 localhost systemd[1]: Stopped target System Initialization.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Local File Systems.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Swaps.
Nov 29 04:34:51 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped dracut mount hook.
Nov 29 04:34:51 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 04:34:51 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 04:34:51 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 29 04:34:51 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 29 04:34:51 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 04:34:51 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 04:34:51 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 04:34:51 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 04:34:51 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 29 04:34:51 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 04:34:51 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 04:34:51 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Closed udev Control Socket.
Nov 29 04:34:51 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Closed udev Kernel Socket.
Nov 29 04:34:51 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 29 04:34:51 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 29 04:34:51 localhost systemd[1]: Starting Cleanup udev Database...
Nov 29 04:34:51 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 04:34:51 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 04:34:51 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped Create System Users.
Nov 29 04:34:51 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Finished Cleanup udev Database.
Nov 29 04:34:51 localhost systemd[1]: Reached target Switch Root.
Nov 29 04:34:51 localhost systemd[1]: Starting Switch Root...
Nov 29 04:34:51 localhost systemd[1]: Switching root.
Nov 29 04:34:51 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Nov 29 04:34:51 localhost systemd-journald[306]: Journal stopped
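
The initrd journald (PID 306) exits here as part of switch-root; a fresh journald instance starts from the real root just below and later flushes the runtime journal (/run/log/journal) to persistent storage. Assuming stock journald, the whole boot, these pre-switch-root lines included, reads back with:

    journalctl -b    # every message from the current boot, initrd included
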
Nov 29 04:34:51 localhost kernel: audit: type=1404 audit(1764390891.243:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 04:34:51 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 04:34:51 localhost kernel: SELinux:  policy capability open_perms=1
Nov 29 04:34:51 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 04:34:51 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 29 04:34:51 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 04:34:51 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 04:34:51 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 04:34:51 localhost kernel: audit: type=1403 audit(1764390891.379:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 04:34:51 localhost systemd[1]: Successfully loaded SELinux policy in 139.192ms.
Nov 29 04:34:51 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.596ms.
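
SELinux comes up enforcing (enforcing=1 old_enforcing=0 in the audit record above) and the policy loads in about 139 ms. A minimal post-boot check, using the standard libselinux/policycoreutils tools:

    getenforce    # prints Enforcing / Permissive / Disabled
    sestatus      # mode, loaded policy name, and whether it matches the configured mode
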
Nov 29 04:34:51 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 04:34:51 localhost systemd[1]: Detected virtualization kvm.
Nov 29 04:34:51 localhost systemd[1]: Detected architecture x86-64.
Nov 29 04:34:51 localhost systemd-rc-local-generator[636]: /etc/rc.d/rc.local is not marked executable, skipping.
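
systemd-rc-local-generator only pulls in rc-local.service when the script carries the execute bit, so this skip is expected on a stock image. If local boot commands are wanted there, the fix is simply:

    chmod +x /etc/rc.d/rc.local    # the generator will enable rc-local.service on the next boot
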
Nov 29 04:34:51 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped Switch Root.
Nov 29 04:34:51 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 04:34:51 localhost systemd[1]: Created slice Slice /system/getty.
Nov 29 04:34:51 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 29 04:34:51 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 29 04:34:51 localhost systemd[1]: Created slice User and Session Slice.
Nov 29 04:34:51 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 04:34:51 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 29 04:34:51 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 04:34:51 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Switch Root.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 29 04:34:51 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 29 04:34:51 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 29 04:34:51 localhost systemd[1]: Reached target Path Units.
Nov 29 04:34:51 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 29 04:34:51 localhost systemd[1]: Reached target Slice Units.
Nov 29 04:34:51 localhost systemd[1]: Reached target Swaps.
Nov 29 04:34:51 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 29 04:34:51 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 29 04:34:51 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 29 04:34:51 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 29 04:34:51 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 29 04:34:51 localhost systemd[1]: Listening on udev Control Socket.
Nov 29 04:34:51 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 29 04:34:51 localhost systemd[1]: Mounting Huge Pages File System...
Nov 29 04:34:51 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 29 04:34:51 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 29 04:34:51 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 29 04:34:51 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 04:34:51 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 29 04:34:51 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 04:34:51 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 29 04:34:51 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 29 04:34:51 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 29 04:34:51 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 04:34:51 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 29 04:34:51 localhost systemd[1]: Stopped Journal Service.
Nov 29 04:34:51 localhost systemd[1]: Starting Journal Service...
Nov 29 04:34:51 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 04:34:51 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 29 04:34:51 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 04:34:51 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 29 04:34:51 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 04:34:51 localhost kernel: fuse: init (API version 7.37)
Nov 29 04:34:51 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 29 04:34:51 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
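
The 2038 warning means this XFS filesystem was created without the bigtime feature. On a sufficiently recent kernel and xfsprogs the feature can be enabled in place on an unmounted, clean filesystem (for a root filesystem that means a rescue environment); a sketch, with the device name hypothetical and the xfsprogs version an assumption:

    xfs_admin -O bigtime=1 /dev/vda1    # hypothetical device; extends timestamps beyond 2038
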
Nov 29 04:34:51 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 29 04:34:51 localhost systemd[1]: Mounted Huge Pages File System.
Nov 29 04:34:51 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 29 04:34:51 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 29 04:34:51 localhost systemd-journald[677]: Journal started
Nov 29 04:34:51 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 04:34:51 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 29 04:34:51 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Started Journal Service.
Nov 29 04:34:51 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 29 04:34:51 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 04:34:51 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 04:34:51 localhost kernel: ACPI: bus type drm_connector registered
Nov 29 04:34:51 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 04:34:51 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 29 04:34:51 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 04:34:51 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 29 04:34:51 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 04:34:51 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 04:34:51 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 04:34:51 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 29 04:34:51 localhost systemd[1]: Mounting FUSE Control File System...
Nov 29 04:34:51 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 04:34:51 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 29 04:34:51 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 04:34:51 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 04:34:51 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 04:34:51 localhost systemd[1]: Starting Create System Users...
Nov 29 04:34:51 localhost systemd[1]: Mounted FUSE Control File System.
Nov 29 04:34:51 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 04:34:51 localhost systemd-journald[677]: Received client request to flush runtime journal.
Nov 29 04:34:51 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 04:34:51 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 04:34:51 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 04:34:51 localhost systemd[1]: Finished Create System Users.
Nov 29 04:34:51 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 04:34:51 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 29 04:34:51 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 04:34:51 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 04:34:51 localhost systemd[1]: Reached target Local File Systems.
Nov 29 04:34:52 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 04:34:52 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 04:34:52 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 04:34:52 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 04:34:52 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 04:34:52 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 04:34:52 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 04:34:52 localhost bootctl[694]: Couldn't find EFI system partition, skipping.
Nov 29 04:34:52 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 04:34:52 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 04:34:52 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 04:34:52 localhost systemd[1]: Starting Security Auditing Service...
Nov 29 04:34:52 localhost systemd[1]: Starting RPC Bind...
Nov 29 04:34:52 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 04:34:52 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 04:34:52 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 04:34:52 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 04:34:52 localhost augenrules[705]: /sbin/augenrules: No change
Nov 29 04:34:52 localhost augenrules[720]: No rules
Nov 29 04:34:52 localhost augenrules[720]: enabled 1
Nov 29 04:34:52 localhost augenrules[720]: failure 1
Nov 29 04:34:52 localhost augenrules[720]: pid 700
Nov 29 04:34:52 localhost augenrules[720]: rate_limit 0
Nov 29 04:34:52 localhost augenrules[720]: backlog_limit 8192
Nov 29 04:34:52 localhost augenrules[720]: lost 0
Nov 29 04:34:52 localhost augenrules[720]: backlog 3
Nov 29 04:34:52 localhost augenrules[720]: backlog_wait_time 60000
Nov 29 04:34:52 localhost augenrules[720]: backlog_wait_time_actual 0
Nov 29 04:34:52 localhost augenrules[720]: enabled 1
Nov 29 04:34:52 localhost augenrules[720]: failure 1
Nov 29 04:34:52 localhost augenrules[720]: pid 700
Nov 29 04:34:52 localhost augenrules[720]: rate_limit 0
Nov 29 04:34:52 localhost augenrules[720]: backlog_limit 8192
Nov 29 04:34:52 localhost augenrules[720]: lost 0
Nov 29 04:34:52 localhost augenrules[720]: backlog 0
Nov 29 04:34:52 localhost augenrules[720]: backlog_wait_time 60000
Nov 29 04:34:52 localhost augenrules[720]: backlog_wait_time_actual 0
Nov 29 04:34:52 localhost augenrules[720]: enabled 1
Nov 29 04:34:52 localhost augenrules[720]: failure 1
Nov 29 04:34:52 localhost augenrules[720]: pid 700
Nov 29 04:34:52 localhost augenrules[720]: rate_limit 0
Nov 29 04:34:52 localhost augenrules[720]: backlog_limit 8192
Nov 29 04:34:52 localhost augenrules[720]: lost 0
Nov 29 04:34:52 localhost augenrules[720]: backlog 1
Nov 29 04:34:52 localhost augenrules[720]: backlog_wait_time 60000
Nov 29 04:34:52 localhost augenrules[720]: backlog_wait_time_actual 0
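
The blocks above are kernel audit status snapshots printed while augenrules loads the (empty) rule set: auditing enabled, daemon PID 700 (auditd above), backlog limit 8192, nothing lost. The same counters can be queried at any time with the audit userspace tools:

    auditctl -s    # current kernel audit status, same fields as above
    auditctl -l    # list loaded rules; prints 'No rules' on this host
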
Nov 29 04:34:52 localhost systemd[1]: Started Security Auditing Service.
Nov 29 04:34:52 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 04:34:52 localhost systemd[1]: Started RPC Bind.
Nov 29 04:34:52 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 04:34:52 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 29 04:34:52 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 04:34:52 localhost systemd[1]: Starting Update is Completed...
Nov 29 04:34:52 localhost systemd[1]: Finished Update is Completed.
Nov 29 04:34:52 localhost systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 04:34:52 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 04:34:52 localhost systemd[1]: Reached target System Initialization.
Nov 29 04:34:52 localhost systemd[1]: Started dnf makecache --timer.
Nov 29 04:34:52 localhost systemd[1]: Started Daily rotation of log files.
Nov 29 04:34:52 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 04:34:52 localhost systemd[1]: Reached target Timer Units.
Nov 29 04:34:52 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 04:34:52 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 04:34:52 localhost systemd[1]: Reached target Socket Units.
Nov 29 04:34:52 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 29 04:34:52 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 04:34:52 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 29 04:34:52 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 04:34:52 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 29 04:34:52 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 04:34:52 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 29 04:34:52 localhost systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
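
NamePolicy is disabled because the kernel command line carries net.ifnames=0, which keeps legacy ethX naming; that is why the NIC appears below as eth0 rather than a predictable name (an ensN-style name, for example). Easy to confirm at runtime:

    cat /proc/cmdline    # shows net.ifnames=0 among the boot parameters
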
Nov 29 04:34:52 localhost systemd[1]: Reached target Basic System.
Nov 29 04:34:52 localhost dbus-broker-lau[743]: Ready
Nov 29 04:34:52 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 04:34:52 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 04:34:52 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 04:34:52 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 04:34:52 localhost systemd[1]: Starting NTP client/server...
Nov 29 04:34:52 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 04:34:52 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 04:34:52 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 04:34:52 localhost systemd[1]: Started irqbalance daemon.
Nov 29 04:34:52 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 04:34:52 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 04:34:52 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 04:34:52 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 04:34:52 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 04:34:52 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 04:34:52 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 29 04:34:52 localhost kernel: Console: switching to colour dummy device 80x25
Nov 29 04:34:52 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 04:34:52 localhost kernel: [drm] features: -context_init
Nov 29 04:34:52 localhost chronyd[785]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 04:34:52 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 04:34:52 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 29 04:34:52 localhost chronyd[785]: Loaded 0 symmetric keys
Nov 29 04:34:52 localhost chronyd[785]: Using right/UTC timezone to obtain leap second data
Nov 29 04:34:52 localhost chronyd[785]: Loaded seccomp filter (level 2)
Nov 29 04:34:52 localhost kernel: [drm] number of scanouts: 1
Nov 29 04:34:52 localhost kernel: [drm] number of cap sets: 0
Nov 29 04:34:52 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 04:34:52 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 04:34:52 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 29 04:34:52 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 04:34:52 localhost systemd[1]: Starting User Login Management...
Nov 29 04:34:52 localhost systemd[1]: Started NTP client/server.
Nov 29 04:34:52 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 04:34:52 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 04:34:52 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 04:34:52 localhost systemd-logind[793]: New seat seat0.
Nov 29 04:34:52 localhost systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 04:34:52 localhost systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 04:34:52 localhost systemd[1]: Started User Login Management.
Nov 29 04:34:52 localhost kernel: kvm_amd: TSC scaling supported
Nov 29 04:34:52 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 29 04:34:52 localhost kernel: kvm_amd: Nested Paging enabled
Nov 29 04:34:52 localhost kernel: kvm_amd: LBR virtualization supported
Nov 29 04:34:52 localhost iptables.init[777]: iptables: Applying firewall rules: [  OK  ]
Nov 29 04:34:53 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 04:34:53 localhost cloud-init[838]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 04:34:53 +0000. Up 5.77 seconds.
Nov 29 04:34:53 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 29 04:34:53 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 29 04:34:53 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp5u03kq9d.mount: Deactivated successfully.
Nov 29 04:34:53 localhost systemd[1]: Starting Hostname Service...
Nov 29 04:34:53 localhost systemd[1]: Started Hostname Service.
Nov 29 04:34:53 np0005539482.novalocal systemd-hostnamed[852]: Hostname set to <np0005539482.novalocal> (static)
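
systemd-hostnamed applies the static hostname pushed by cloud-init's local stage; note the log prefix switches from localhost to np0005539482.novalocal from here on. hostnamectl talks to the same service:

    hostnamectl status                        # static/transient hostname and machine metadata
    hostnamectl set-hostname new.example.com  # hypothetical name; same mechanism as above
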
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Reached target Preparation for Network.
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Starting Network Manager...
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6486] NetworkManager (version 1.54.1-1.el9) is starting... (boot:919d61e4-148b-4df4-a773-feb4933c1c42)
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6491] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6567] manager[0x5625a18a4080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6606] hostname: hostname: using hostnamed
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6606] hostname: static hostname changed from (none) to "np0005539482.novalocal"
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6612] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6744] manager[0x5625a18a4080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6747] manager[0x5625a18a4080]: rfkill: WWAN hardware radio set enabled
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6790] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6790] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6791] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6794] manager: Networking is enabled by state file
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6796] settings: Loaded settings plugin: keyfile (internal)
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6808] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6833] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
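
The deprecation warning names its own remedy: profiles stored as ifcfg files can be converted to the keyfile format with the command the message suggests,

    nmcli connection migrate    # rewrites ifcfg-rh profiles (e.g. 'System eth0') as keyfiles
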
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6846] dhcp: init: Using DHCP client 'internal'
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6849] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6861] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6868] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6876] device (lo): Activation: starting connection 'lo' (aeac58a6-e034-4337-948c-d58870c36302)
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6885] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6889] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6916] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6921] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6923] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6925] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6927] device (eth0): carrier: link connected
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6931] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6937] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6942] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6945] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6946] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6949] manager: NetworkManager state is now CONNECTING
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6951] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6957] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.6960] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Started Network Manager.
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Reached target Network.
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7236] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7239] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7248] device (lo): Activation: successful, device activated.
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Reached target NFS client services.
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Reached target Remote File Systems.
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7616] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7629] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7652] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7670] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7671] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7674] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7677] device (eth0): Activation: successful, device activated.
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7683] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 04:34:53 np0005539482.novalocal NetworkManager[856]: <info>  [1764390893.7686] manager: startup complete
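
eth0 walks the full activation state machine (disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated) in well under a second, and the manager reaches CONNECTED_GLOBAL. The matching runtime checks, assuming standard nmcli:

    nmcli general status    # overall NetworkManager state and connectivity
    nmcli device status     # per-device state; eth0 should show 'connected'
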
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 29 04:34:53 np0005539482.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 04:34:54 +0000. Up 6.73 seconds.
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: |  eth0  | True |         38.102.83.17         | 255.255.255.0 | global | fa:16:3e:1f:f5:ec |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe1f:f5ec/64 |       .       |  link  | fa:16:3e:1f:f5:ec |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 04:34:54 np0005539482.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
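
The route tables cloud-init prints mirror what the kernel holds: a default route via 38.102.83.1 and a host route to the metadata address 169.254.169.254 via 38.102.83.126, all on eth0. The live equivalents:

    ip -4 route show    # default via 38.102.83.1 dev eth0, plus the metadata host route
    ip -6 route show    # link-local fe80::/64 on eth0
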
Nov 29 04:34:54 np0005539482.novalocal useradd[986]: new group: name=cloud-user, GID=1001
Nov 29 04:34:54 np0005539482.novalocal useradd[986]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 29 04:34:54 np0005539482.novalocal useradd[986]: add 'cloud-user' to group 'adm'
Nov 29 04:34:54 np0005539482.novalocal useradd[986]: add 'cloud-user' to group 'systemd-journal'
Nov 29 04:34:54 np0005539482.novalocal useradd[986]: add 'cloud-user' to shadow group 'adm'
Nov 29 04:34:54 np0005539482.novalocal useradd[986]: add 'cloud-user' to shadow group 'systemd-journal'
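
cloud-init creates the image's default user (cloud-user, UID/GID 1001) and adds it to the adm and systemd-journal groups, which is what grants it read access to system logs. Quick verification:

    id cloud-user    # uid=1001 gid=1001, groups include adm and systemd-journal
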
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: Generating public/private rsa key pair.
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: The key fingerprint is:
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: SHA256:ALoIgfLCj6dBFoGXeKG95UMEr9gW+lngdAWyDGR51Hw root@np0005539482.novalocal
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: The key's randomart image is:
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: +---[RSA 3072]----+
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |*=B=*..          |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |*O+B = E         |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |=+@ = o          |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |o@.@   .         |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |*.X +   S        |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: | = = .           |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |  *              |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: | .               |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |                 |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: Generating public/private ecdsa key pair.
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: The key fingerprint is:
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: SHA256:szS6/azr6zWLNGOc3qRPjtOudZU71M2G+GzB3LRQz4U root@np0005539482.novalocal
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: The key's randomart image is:
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: +---[ECDSA 256]---+
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |               o.|
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |              E.o|
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |             .  +|
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |             +.Bo|
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |        S   . O.*|
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |       + =   = + |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |      . O.* . *  |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |       *.#.+ . . |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |      .o&XO      |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: +----[SHA256]-----+
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: Generating public/private ed25519 key pair.
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: The key fingerprint is:
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: SHA256:GAJwerN00ozBIJJYeYgzslfG5EvwLlOkgCp+SxnR760 root@np0005539482.novalocal
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: The key's randomart image is:
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: +--[ED25519 256]--+
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |OB*=+            |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |@++&=.           |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |++B+@ o          |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |+o.X o +         |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |o.+ = o S        |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: | . *   . .       |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |  o .   .        |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |   .   E         |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: |                 |
Nov 29 04:34:55 np0005539482.novalocal cloud-init[920]: +----[SHA256]-----+
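
cloud-init regenerates all three host key pairs on first boot by invoking ssh-keygen (the "Generating public/private ... key pair" output above is ssh-keygen's own), and the fingerprints are echoed again in the console banner further below for out-of-band verification. They can be recomputed at any time:

    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub          # 256 SHA256:GAJw... (ED25519)
    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done
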
Nov 29 04:34:55 np0005539482.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 04:34:55 np0005539482.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 29 04:34:55 np0005539482.novalocal systemd[1]: Reached target Network is Online.
Nov 29 04:34:55 np0005539482.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 04:34:55 np0005539482.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 29 04:34:55 np0005539482.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Starting System Logging Service...
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 29 04:34:56 np0005539482.novalocal sm-notify[1002]: Version 2.5.4 starting
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Starting Permit User Sessions...
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 29 04:34:56 np0005539482.novalocal sshd[1004]: Server listening on 0.0.0.0 port 22.
Nov 29 04:34:56 np0005539482.novalocal sshd[1004]: Server listening on :: port 22.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Finished Permit User Sessions.
Nov 29 04:34:56 np0005539482.novalocal rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Nov 29 04:34:56 np0005539482.novalocal rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Started Command Scheduler.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Started Getty on tty1.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Reached target Login Prompts.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Started System Logging Service.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Reached target Multi-User System.
Nov 29 04:34:56 np0005539482.novalocal crond[1009]: (CRON) STARTUP (1.5.7)
Nov 29 04:34:56 np0005539482.novalocal crond[1009]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 29 04:34:56 np0005539482.novalocal crond[1009]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 95% if used.)
Nov 29 04:34:56 np0005539482.novalocal crond[1009]: (CRON) INFO (running with inotify support)
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 04:34:56 np0005539482.novalocal sshd-session[1025]: Unable to negotiate with 38.102.83.114 port 47812: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 04:34:56 np0005539482.novalocal sshd-session[1033]: Connection reset by 38.102.83.114 port 47822 [preauth]
Nov 29 04:34:56 np0005539482.novalocal sshd-session[1042]: Unable to negotiate with 38.102.83.114 port 47832: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 29 04:34:56 np0005539482.novalocal sshd-session[1053]: Unable to negotiate with 38.102.83.114 port 47838: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 29 04:34:56 np0005539482.novalocal rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:34:56 np0005539482.novalocal sshd-session[1007]: Connection closed by 38.102.83.114 port 47810 [preauth]
Nov 29 04:34:56 np0005539482.novalocal kdumpctl[1012]: kdump: No kdump initial ramdisk found.
Nov 29 04:34:56 np0005539482.novalocal kdumpctl[1012]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 04:34:56 np0005539482.novalocal sshd-session[1079]: Unable to negotiate with 38.102.83.114 port 47864: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 29 04:34:56 np0005539482.novalocal sshd-session[1084]: Unable to negotiate with 38.102.83.114 port 47878: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 29 04:34:56 np0005539482.novalocal sshd-session[1062]: Connection closed by 38.102.83.114 port 47854 [preauth]
Nov 29 04:34:56 np0005539482.novalocal sshd-session[1073]: Connection closed by 38.102.83.114 port 47858 [preauth]
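
A remote host at 38.102.83.114 opens a burst of connections seconds after sshd starts, each offering a single host key family (ed25519-cert, ecdsa-nistp384/521, ssh-rsa, ssh-dss, ...) and failing with "no matching host key type found". One connection per key type is the signature of an ssh-keyscan-style sweep rather than a misconfigured client; a single probe can be reproduced, target hypothetical:

    ssh-keyscan -t dsa 38.102.83.17    # triggers the same 'Their offer: ssh-dss' server log line
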
Nov 29 04:34:56 np0005539482.novalocal cloud-init[1139]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 04:34:56 +0000. Up 8.91 seconds.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 04:34:56 np0005539482.novalocal dracut[1284]: dracut-057-102.git20250818.el9
Nov 29 04:34:56 np0005539482.novalocal cloud-init[1300]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 04:34:56 +0000. Up 9.30 seconds.
Nov 29 04:34:56 np0005539482.novalocal cloud-init[1302]: #############################################################
Nov 29 04:34:56 np0005539482.novalocal cloud-init[1303]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 04:34:56 np0005539482.novalocal cloud-init[1305]: 256 SHA256:szS6/azr6zWLNGOc3qRPjtOudZU71M2G+GzB3LRQz4U root@np0005539482.novalocal (ECDSA)
Nov 29 04:34:56 np0005539482.novalocal cloud-init[1307]: 256 SHA256:GAJwerN00ozBIJJYeYgzslfG5EvwLlOkgCp+SxnR760 root@np0005539482.novalocal (ED25519)
Nov 29 04:34:56 np0005539482.novalocal cloud-init[1311]: 3072 SHA256:ALoIgfLCj6dBFoGXeKG95UMEr9gW+lngdAWyDGR51Hw root@np0005539482.novalocal (RSA)
Nov 29 04:34:56 np0005539482.novalocal cloud-init[1312]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 04:34:56 np0005539482.novalocal cloud-init[1313]: #############################################################
Nov 29 04:34:56 np0005539482.novalocal dracut[1286]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
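
kdumpctl found no crash initramfs for the running kernel, so it drives a full dracut rebuild: hostonly mode, the root filesystem pinned by UUID, zstd-squashed, written to /boot/initramfs-5.14.0-642.el9.x86_64kdump.img. The long run of "module ... will not be installed" lines that follows is normal for a minimal cloud guest. The same rebuild can be triggered by hand:

    kdumpctl rebuild    # regenerate the kdump initramfs for the running kernel
    kdumpctl status     # report whether the crash kernel is loaded
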
Nov 29 04:34:56 np0005539482.novalocal cloud-init[1300]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 04:34:56 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.51 seconds
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 04:34:56 np0005539482.novalocal systemd[1]: Reached target Cloud-init target.
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: memstrack is not available
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 04:34:57 np0005539482.novalocal dracut[1286]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: memstrack is not available
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: *** Including module: systemd ***
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: *** Including module: fips ***
Nov 29 04:34:58 np0005539482.novalocal chronyd[785]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Nov 29 04:34:58 np0005539482.novalocal chronyd[785]: System clock TAI offset set to 37 seconds
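
chronyd selects its first NTP source a few seconds into boot and programs the TAI-UTC offset (37 s) into the kernel. Source selection and sync state are inspectable with the standard client:

    chronyc sources -v    # candidate servers; '*' marks 23.159.16.194 as selected
    chronyc tracking      # current offset, stratum, and leap status
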
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: *** Including module: systemd-initrd ***
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: *** Including module: i18n ***
Nov 29 04:34:58 np0005539482.novalocal dracut[1286]: *** Including module: drm ***
Nov 29 04:34:59 np0005539482.novalocal dracut[1286]: *** Including module: prefixdevname ***
Nov 29 04:34:59 np0005539482.novalocal dracut[1286]: *** Including module: kernel-modules ***
Nov 29 04:34:59 np0005539482.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]: *** Including module: kernel-modules-extra ***
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]: *** Including module: qemu ***
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]: *** Including module: fstab-sys ***
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]: *** Including module: rootfs-block ***
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]: *** Including module: terminfo ***
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]: *** Including module: udev-rules ***
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]: Skipping udev rule: 91-permissions.rules
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]: *** Including module: virtiofs ***
Nov 29 04:35:00 np0005539482.novalocal dracut[1286]: *** Including module: dracut-systemd ***
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]: *** Including module: usrmount ***
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]: *** Including module: base ***
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]: *** Including module: fs-lib ***
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]: *** Including module: kdumpbase ***
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:   microcode_ctl module: mangling fw_dir
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: configuration "intel" is ignored
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 04:35:01 np0005539482.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
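
[editor's note] microcode_ctl ships per-CPU-model "caveat" payloads; each data directory is considered in turn and, on this KVM guest where no Intel model/BIOS combination matches, every configuration is ignored, leaving fw_dir at its default. A sketch of the iterate-and-skip loop under that assumption (the match test is a stand-in for the real readme/config checks):

    import os

    base = "/usr/share/microcode_ctl/ucode_with_caveats"

    def caveat_applies(path):
        # Stand-in: the real dracut hook checks CPU family/model/stepping,
        # BIOS and kernel version against the config shipped in each dir.
        return False

    entries = sorted(os.listdir(base)) if os.path.isdir(base) else []
    for entry in entries:
        if not caveat_applies(os.path.join(base, entry)):
            print(f'microcode_ctl: configuration "{entry}" is ignored')

    print('microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"')
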
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]: *** Including module: openssl ***
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]: *** Including module: shutdown ***
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]: *** Including module: squash ***
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]: *** Including modules done ***
Nov 29 04:35:02 np0005539482.novalocal dracut[1286]: *** Installing kernel module dependencies ***
Nov 29 04:35:03 np0005539482.novalocal dracut[1286]: *** Installing kernel module dependencies done ***
Nov 29 04:35:03 np0005539482.novalocal dracut[1286]: *** Resolving executable dependencies ***
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: IRQ 25 affinity is now unmanaged
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: IRQ 31 affinity is now unmanaged
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: IRQ 28 affinity is now unmanaged
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: IRQ 32 affinity is now unmanaged
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: IRQ 30 affinity is now unmanaged
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 04:35:03 np0005539482.novalocal irqbalance[782]: IRQ 29 affinity is now unmanaged
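
[editor's note] irqbalance steers each IRQ by writing a CPU bitmask to /proc/irq/N/smp_affinity; on KVM guests some virtio interrupts reject the write with EPERM, and irqbalance then marks them unmanaged, exactly as logged above. A hedged sketch of the same write (IRQ number and mask are examples; requires root):

    import errno

    def set_irq_affinity(irq: int, cpu_mask: int) -> bool:
        """Try to pin an IRQ to the CPUs in cpu_mask (a hex bitmask)."""
        try:
            with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
                f.write(f"{cpu_mask:x}\n")
            return True
        except OSError as e:
            if e.errno == errno.EPERM:
                print(f"Cannot change IRQ {irq} affinity: Operation not permitted")
            return False

    # Example: try to pin IRQ 25 to CPU0 (fails on many virtio IRQs)
    set_irq_affinity(25, 0x1)
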
Nov 29 04:35:03 np0005539482.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 04:35:04 np0005539482.novalocal dracut[1286]: *** Resolving executable dependencies done ***
Nov 29 04:35:04 np0005539482.novalocal dracut[1286]: *** Generating early-microcode cpio image ***
Nov 29 04:35:04 np0005539482.novalocal dracut[1286]: *** Store current command line parameters ***
Nov 29 04:35:04 np0005539482.novalocal dracut[1286]: Stored kernel commandline:
Nov 29 04:35:04 np0005539482.novalocal dracut[1286]: No dracut internal kernel commandline stored in the initramfs
Nov 29 04:35:05 np0005539482.novalocal dracut[1286]: *** Install squash loader ***
Nov 29 04:35:05 np0005539482.novalocal dracut[1286]: *** Squashing the files inside the initramfs ***
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: *** Squashing the files inside the initramfs done ***
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: *** Hardlinking files ***
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: Mode:           real
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: Files:          50
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: Linked:         0 files
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: Compared:       0 xattrs
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: Compared:       0 files
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: Saved:          0 B
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: Duration:       0.000542 seconds
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: *** Hardlinking files done ***
Nov 29 04:35:07 np0005539482.novalocal dracut[1286]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
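
[editor's note] The hardlink pass above hashes candidate files inside the image and replaces byte-identical copies with hard links; here 50 files were scanned and none matched, so 0 B was saved. A minimal sketch of the technique (not dracut's actual implementation):

    import hashlib, os

    def hardlink_dupes(root: str) -> int:
        """Replace byte-identical regular files under root with hard links."""
        seen = {}      # sha256 digest -> first path with that content
        linked = 0
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                if not os.path.isfile(path) or os.path.islink(path):
                    continue
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                if digest in seen:
                    os.unlink(path)
                    os.link(seen[digest], path)   # point at the kept copy
                    linked += 1
                else:
                    seen[digest] = path
        return linked
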
Nov 29 04:35:07 np0005539482.novalocal kdumpctl[1012]: kdump: kexec: loaded kdump kernel
Nov 29 04:35:07 np0005539482.novalocal kdumpctl[1012]: kdump: Starting kdump: [OK]
Nov 29 04:35:07 np0005539482.novalocal systemd[1]: Finished Crash recovery kernel arming.
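
[editor's note] With the kdump initramfs built and the crash kernel loaded via kexec, the service reports armed; the reserved memory itself comes from the crashkernel= ranges on the kernel command line. One way to confirm the armed state afterwards (both the sysfs flag and kdumpctl are standard on EL9):

    import subprocess

    # "1" here means a crash kernel is loaded and ready to kexec into
    with open("/sys/kernel/kexec_crash_loaded") as f:
        print("crash kernel loaded:", f.read().strip() == "1")

    # kdumpctl reports the service-level view of the same state
    subprocess.run(["kdumpctl", "status"], check=False)
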
Nov 29 04:35:07 np0005539482.novalocal systemd[1]: Startup finished in 1.506s (kernel) + 2.388s (initrd) + 16.697s (userspace) = 20.593s.
Nov 29 04:35:09 np0005539482.novalocal sshd-session[4294]: Accepted publickey for zuul from 38.102.83.114 port 58048 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 29 04:35:09 np0005539482.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 29 04:35:09 np0005539482.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 04:35:09 np0005539482.novalocal systemd-logind[793]: New session 1 of user zuul.
Nov 29 04:35:09 np0005539482.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 04:35:09 np0005539482.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 29 04:35:09 np0005539482.novalocal systemd[4298]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Queued start job for default target Main User Target.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Created slice User Application Slice.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Reached target Paths.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Reached target Timers.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Starting D-Bus User Message Bus Socket...
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Starting Create User's Volatile Files and Directories...
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Listening on D-Bus User Message Bus Socket.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Reached target Sockets.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Finished Create User's Volatile Files and Directories.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Reached target Basic System.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Reached target Main User Target.
Nov 29 04:35:10 np0005539482.novalocal systemd[4298]: Startup finished in 127ms.
Nov 29 04:35:10 np0005539482.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 29 04:35:10 np0005539482.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 29 04:35:10 np0005539482.novalocal sshd-session[4294]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 04:35:10 np0005539482.novalocal python3[4380]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 04:35:11 np0005539482.novalocal sshd-session[4385]: Received disconnect from 190.0.247.85 port 52246:11: Bye Bye [preauth]
Nov 29 04:35:11 np0005539482.novalocal sshd-session[4385]: Disconnected from authenticating user root 190.0.247.85 port 52246 [preauth]
Nov 29 04:35:13 np0005539482.novalocal python3[4410]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 04:35:19 np0005539482.novalocal python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 04:35:19 np0005539482.novalocal python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 04:35:21 np0005539482.novalocal python3[4534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDA8z7osgMfJ2V68AJKFgst/U0KXcc4VJrmzfWSwLCAOfFr1nGizEz1bHmhD5AP5T+NQF48QPTWJekwRWtTol+JQ7PPjXRnDneG8Q/rPEXMV2aBfw+3PdEYOOVD6H6t3kKlftuipUslUTns+Kva4yhOhX5u0owj67mG7GhRjdDLVIjB4JT88BhrqcF4m+AhhAAafKmQDudMb4CcmFRv0Ibb5iSOiJDB0jz7EoZa+1AeLksNBfhUsPuIc0uQ1aWze7thVlS8tvR1hTZKkPl72zSegthkyER8OF8wDl9qNZuzw5fYSCpr18IOUzTnbmv4OJ5N/fQwqgMNsgk+87085SfBPwAVUYlpmbK4CCxoqyMKRb2ShJEW2WVJd0ltBSOt1mizhuV9wd7pwv9DAxGfXKMuPyoMjiXGnKPqW1VnPiNHqvEacoOu/9XDRotLT29O4JNnTQpvIVEhEXytI5BzdLE9t3NXwle/rUM4j91OvyZXNLCuWOpC5JfBokuv5++nJNs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:22 np0005539482.novalocal python3[4558]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:22 np0005539482.novalocal python3[4657]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:35:23 np0005539482.novalocal python3[4728]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764390922.424452-207-215796478777749/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=c10d28bb2bab4e67bcc34b3958ef9bbe_id_rsa follow=False checksum=22cfbedb31c632b8064e31452f96b846a8515459 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:23 np0005539482.novalocal python3[4851]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:35:23 np0005539482.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 04:35:24 np0005539482.novalocal python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764390923.3509195-240-189515479250417/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=c10d28bb2bab4e67bcc34b3958ef9bbe_id_rsa.pub follow=False checksum=c333e9f91b79a81adf6caca410967b32009e7daa backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:25 np0005539482.novalocal python3[4972]: ansible-ping Invoked with data=pong
Nov 29 04:35:26 np0005539482.novalocal python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 04:35:27 np0005539482.novalocal python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 04:35:28 np0005539482.novalocal python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:29 np0005539482.novalocal python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:29 np0005539482.novalocal python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:29 np0005539482.novalocal python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:30 np0005539482.novalocal python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:30 np0005539482.novalocal python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
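
[editor's note] The ansible-file and copy invocations above log mode as a decimal integer, which is just the octal permission rendered in base 10: 493 is 0o755, 448 is 0o700, 420 is 0o644, 384 is 0o600, and the later 511 and 288 are 0o777 and 0o440. A one-liner to convert while reading such logs:

    for mode in (493, 448, 420, 384, 511, 288):
        print(mode, "=", oct(mode))   # e.g. 493 = 0o755
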
Nov 29 04:35:31 np0005539482.novalocal sudo[5230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hslsxsgfxguxktcbfyoslklsasjcvqkv ; /usr/bin/python3'
Nov 29 04:35:31 np0005539482.novalocal sudo[5230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:35:31 np0005539482.novalocal python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:31 np0005539482.novalocal sudo[5230]: pam_unix(sudo:session): session closed for user root
Nov 29 04:35:32 np0005539482.novalocal sudo[5308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwxvxpghhjepqxqnbjkudsfuhejsmxvf ; /usr/bin/python3'
Nov 29 04:35:32 np0005539482.novalocal sudo[5308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:35:32 np0005539482.novalocal python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:35:32 np0005539482.novalocal sudo[5308]: pam_unix(sudo:session): session closed for user root
Nov 29 04:35:32 np0005539482.novalocal sudo[5381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-busbtuqrkhgjddguoykqwcyyufiuokzv ; /usr/bin/python3'
Nov 29 04:35:32 np0005539482.novalocal sudo[5381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:35:32 np0005539482.novalocal python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764390932.010001-21-198941903856383/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:32 np0005539482.novalocal sudo[5381]: pam_unix(sudo:session): session closed for user root
Nov 29 04:35:33 np0005539482.novalocal irqbalance[782]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 29 04:35:33 np0005539482.novalocal irqbalance[782]: IRQ 26 affinity is now unmanaged
Nov 29 04:35:33 np0005539482.novalocal python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:33 np0005539482.novalocal python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:34 np0005539482.novalocal python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:34 np0005539482.novalocal python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:34 np0005539482.novalocal python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:34 np0005539482.novalocal python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:35 np0005539482.novalocal python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:35 np0005539482.novalocal python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:35 np0005539482.novalocal python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:35 np0005539482.novalocal python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:36 np0005539482.novalocal python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:36 np0005539482.novalocal python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:36 np0005539482.novalocal python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:36 np0005539482.novalocal python3[5745]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:37 np0005539482.novalocal python3[5769]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:37 np0005539482.novalocal python3[5793]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:37 np0005539482.novalocal python3[5817]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:37 np0005539482.novalocal python3[5841]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:38 np0005539482.novalocal python3[5865]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:38 np0005539482.novalocal python3[5889]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:38 np0005539482.novalocal python3[5913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:38 np0005539482.novalocal sshd-session[5720]: Invalid user deploy from 101.47.141.125 port 53928
Nov 29 04:35:39 np0005539482.novalocal python3[5937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:39 np0005539482.novalocal sshd-session[5939]: error: kex_exchange_identification: read: Connection reset by peer
Nov 29 04:35:39 np0005539482.novalocal sshd-session[5939]: Connection reset by 45.140.17.97 port 19552
Nov 29 04:35:39 np0005539482.novalocal python3[5963]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:39 np0005539482.novalocal sshd-session[5720]: Received disconnect from 101.47.141.125 port 53928:11: Bye Bye [preauth]
Nov 29 04:35:39 np0005539482.novalocal sshd-session[5720]: Disconnected from invalid user deploy 101.47.141.125 port 53928 [preauth]
Nov 29 04:35:39 np0005539482.novalocal python3[5987]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:39 np0005539482.novalocal python3[6011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:35:40 np0005539482.novalocal python3[6035]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
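
[editor's note] Each ansible-authorized_key call above is idempotent: the key is appended to ~/.ssh/authorized_keys only if it is not already present, so re-runs are no-ops. A minimal sketch of that behaviour (paths and helper are illustrative, not the module's real code):

    import os

    def add_authorized_key(home: str, key: str) -> bool:
        """Append key to authorized_keys unless it is already there."""
        ssh_dir = os.path.join(home, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        path = os.path.join(ssh_dir, "authorized_keys")
        existing = ""
        if os.path.exists(path):
            with open(path) as f:
                existing = f.read()
        if key.strip() in existing:
            return False              # already present, nothing to do
        with open(path, "a") as f:
            f.write(key.strip() + "\n")
        os.chmod(path, 0o600)
        return True
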
Nov 29 04:35:43 np0005539482.novalocal sudo[6059]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usgremjixvotadxvvnfqjgmwnjnsmfom ; /usr/bin/python3'
Nov 29 04:35:43 np0005539482.novalocal sudo[6059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:35:43 np0005539482.novalocal python3[6061]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 04:35:43 np0005539482.novalocal systemd[1]: Starting Time & Date Service...
Nov 29 04:35:43 np0005539482.novalocal systemd[1]: Started Time & Date Service.
Nov 29 04:35:43 np0005539482.novalocal systemd-timedated[6063]: Changed time zone to 'UTC' (UTC).
Nov 29 04:35:43 np0005539482.novalocal sudo[6059]: pam_unix(sudo:session): session closed for user root
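
[editor's note] community.general.timezone talks to systemd-timedated over D-Bus, which is also what timedatectl does; the service is bus-activated, applies the change, and idles out (hence the later "systemd-timedated.service: Deactivated successfully"). An equivalent direct call, assuming root:

    import subprocess

    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
    # Read back the active zone to verify
    print(subprocess.run(["timedatectl", "show", "-p", "Timezone", "--value"],
                         capture_output=True, text=True).stdout.strip())
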
Nov 29 04:35:43 np0005539482.novalocal sudo[6090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atghzcjggmuckpoazvdmakuscaaahwsq ; /usr/bin/python3'
Nov 29 04:35:43 np0005539482.novalocal sudo[6090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:35:43 np0005539482.novalocal python3[6092]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:43 np0005539482.novalocal sudo[6090]: pam_unix(sudo:session): session closed for user root
Nov 29 04:35:44 np0005539482.novalocal python3[6168]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:35:44 np0005539482.novalocal python3[6239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764390943.9007714-153-83171242602396/source _original_basename=tmpbyockkw4 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:45 np0005539482.novalocal python3[6339]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:35:45 np0005539482.novalocal python3[6410]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764390944.790436-183-149303766224578/source _original_basename=tmp4irdtlv_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:45 np0005539482.novalocal sudo[6510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uydcyptmoxkaefzkzgkbpsjwyhdunqug ; /usr/bin/python3'
Nov 29 04:35:45 np0005539482.novalocal sudo[6510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:35:46 np0005539482.novalocal python3[6512]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:35:46 np0005539482.novalocal sudo[6510]: pam_unix(sudo:session): session closed for user root
Nov 29 04:35:46 np0005539482.novalocal sudo[6583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvrbveyghzlhuexfbyzsjtdqxcwfwbqg ; /usr/bin/python3'
Nov 29 04:35:46 np0005539482.novalocal sudo[6583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:35:46 np0005539482.novalocal python3[6585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764390945.8148494-231-52705442630964/source _original_basename=tmp89kk4x56 follow=False checksum=673d2f3d6c56c6a6f0fd71b2f865eaf754405451 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:46 np0005539482.novalocal sudo[6583]: pam_unix(sudo:session): session closed for user root
Nov 29 04:35:46 np0005539482.novalocal python3[6635]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:35:47 np0005539482.novalocal python3[6661]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:35:47 np0005539482.novalocal sshd-session[6610]: Received disconnect from 176.109.67.96 port 39760:11: Bye Bye [preauth]
Nov 29 04:35:47 np0005539482.novalocal sshd-session[6610]: Disconnected from authenticating user root 176.109.67.96 port 39760 [preauth]
Nov 29 04:35:47 np0005539482.novalocal sudo[6739]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apsadpzlrfsxymwymywaviisdizqgvod ; /usr/bin/python3'
Nov 29 04:35:47 np0005539482.novalocal sudo[6739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:35:47 np0005539482.novalocal python3[6741]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:35:47 np0005539482.novalocal sudo[6739]: pam_unix(sudo:session): session closed for user root
Nov 29 04:35:47 np0005539482.novalocal sudo[6812]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdxjoxullhmzzvcimpozqyfydbxtcehj ; /usr/bin/python3'
Nov 29 04:35:47 np0005539482.novalocal sudo[6812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:35:47 np0005539482.novalocal python3[6814]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764390947.3701653-273-261965661699807/source _original_basename=tmpzahxeyn2 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:47 np0005539482.novalocal sudo[6812]: pam_unix(sudo:session): session closed for user root
Nov 29 04:35:48 np0005539482.novalocal sudo[6863]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xobuiihzpsvydvctepawaaxnjkerbbyv ; /usr/bin/python3'
Nov 29 04:35:48 np0005539482.novalocal sudo[6863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:35:48 np0005539482.novalocal python3[6865]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-5c3c-8300-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:35:48 np0005539482.novalocal sudo[6863]: pam_unix(sudo:session): session closed for user root
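
[editor's note] The copy of /etc/sudoers.d/zuul-sudo-grep is followed by /usr/sbin/visudo -c, the standard guard against installing a syntactically broken sudoers drop-in. A sketch of the safe pattern — validate the candidate file first, install only on success; the rule shown is a hypothetical example, the real drop-in's content is not in the log:

    import shutil, subprocess, tempfile

    def install_sudoers_dropin(content: str, dest: str) -> None:
        with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
            tmp.write(content)
            candidate = tmp.name
        # visudo -c -f checks a single file; non-zero exit = syntax error
        subprocess.run(["/usr/sbin/visudo", "-c", "-f", candidate], check=True)
        shutil.move(candidate, dest)

    # Example rule only; not the actual zuul-sudo-grep contents
    install_sudoers_dropin("zuul ALL=(ALL) NOPASSWD:ALL\n",
                           "/etc/sudoers.d/zuul-sudo-grep")
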
Nov 29 04:35:49 np0005539482.novalocal python3[6893]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e3b-3c83-5c3c-8300-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 29 04:35:50 np0005539482.novalocal python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:35:53 np0005539482.novalocal irqbalance[782]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 29 04:35:53 np0005539482.novalocal irqbalance[782]: IRQ 27 affinity is now unmanaged
Nov 29 04:36:02 np0005539482.novalocal sshd-session[6923]: Invalid user ubuntu from 52.224.240.74 port 43110
Nov 29 04:36:02 np0005539482.novalocal sshd-session[6923]: Received disconnect from 52.224.240.74 port 43110:11: Bye Bye [preauth]
Nov 29 04:36:02 np0005539482.novalocal sshd-session[6923]: Disconnected from invalid user ubuntu 52.224.240.74 port 43110 [preauth]
Nov 29 04:36:09 np0005539482.novalocal sudo[6948]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezxswzshsymjadzqoiewolmojlwesxuj ; /usr/bin/python3'
Nov 29 04:36:09 np0005539482.novalocal sudo[6948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:36:09 np0005539482.novalocal python3[6950]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:36:09 np0005539482.novalocal sudo[6948]: pam_unix(sudo:session): session closed for user root
Nov 29 04:36:13 np0005539482.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 04:36:24 np0005539482.novalocal sshd-session[6953]: Received disconnect from 190.0.247.85 port 56262:11: Bye Bye [preauth]
Nov 29 04:36:24 np0005539482.novalocal sshd-session[6953]: Disconnected from authenticating user root 190.0.247.85 port 56262 [preauth]
Nov 29 04:36:42 np0005539482.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 04:36:42 np0005539482.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 04:36:42 np0005539482.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 04:36:42 np0005539482.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 04:36:42 np0005539482.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 04:36:42 np0005539482.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 04:36:42 np0005539482.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 04:36:42 np0005539482.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 04:36:42 np0005539482.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 04:36:42 np0005539482.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
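
[editor's note] The kernel lines above show a virtio network device (PCI ID 1af4:1000) being hot-plugged at 00:07.0: the BARs are assigned, the device is enabled, and NetworkManager then picks up the new eth1. If a hot-plugged device does not appear, a manual bus rescan can be requested (root required):

    # Ask the kernel to re-enumerate the PCI bus after a hotplug event
    with open("/sys/bus/pci/rescan", "w") as f:
        f.write("1")
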
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0430] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 04:36:43 np0005539482.novalocal systemd-udevd[6955]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0607] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0633] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0638] device (eth1): carrier: link connected
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0640] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0647] policy: auto-activating connection 'Wired connection 1' (68471d98-bb78-39be-9a57-275a98f2e1d6)
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0651] device (eth1): Activation: starting connection 'Wired connection 1' (68471d98-bb78-39be-9a57-275a98f2e1d6)
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0653] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0656] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0661] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 04:36:43 np0005539482.novalocal NetworkManager[856]: <info>  [1764391003.0666] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 04:36:43 np0005539482.novalocal python3[6982]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-7f1d-78cb-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
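
[editor's note] `ip -j link` emits the kernel's link table as JSON, which is why the job shells out to it rather than scraping text output. A minimal reader (ifname, operstate and address are standard iproute2 JSON keys):

    import json, subprocess

    out = subprocess.run(["ip", "-j", "link"],
                         capture_output=True, text=True, check=True).stdout
    for link in json.loads(out):
        print(link["ifname"], link["operstate"], link.get("address", "-"))
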
Nov 29 04:36:52 np0005539482.novalocal sshd-session[6985]: Invalid user zjw from 176.109.67.96 port 40766
Nov 29 04:36:52 np0005539482.novalocal sshd-session[6985]: Received disconnect from 176.109.67.96 port 40766:11: Bye Bye [preauth]
Nov 29 04:36:52 np0005539482.novalocal sshd-session[6985]: Disconnected from invalid user zjw 176.109.67.96 port 40766 [preauth]
Nov 29 04:36:53 np0005539482.novalocal sudo[7062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhaxwgomvfnvckxpbvyachbgaalnnctn ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 04:36:53 np0005539482.novalocal sudo[7062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:36:53 np0005539482.novalocal python3[7064]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:36:53 np0005539482.novalocal sudo[7062]: pam_unix(sudo:session): session closed for user root
Nov 29 04:36:53 np0005539482.novalocal sudo[7135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyphrzisvimludxuqjcsacuoaqgkeiem ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 04:36:53 np0005539482.novalocal sudo[7135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:36:54 np0005539482.novalocal python3[7137]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764391013.3792095-102-28705876702986/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=313ee6c5e98aa318ee46868b9de42aec2db266a7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:36:54 np0005539482.novalocal sudo[7135]: pam_unix(sudo:session): session closed for user root
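
[editor's note] The copied ci-private-network.nmconnection is a NetworkManager keyfile (INI-style), installed root-owned with mode 0600 because NM refuses keyfiles that are group- or world-readable. A hedged sketch of generating such a file with configparser — the section values are illustrative, not the job's real template:

    import configparser, os, uuid

    conn = configparser.ConfigParser()
    conn["connection"] = {
        "id": "ci-private-network",
        "uuid": str(uuid.uuid4()),
        "type": "ethernet",
        "interface-name": "eth1",
    }
    conn["ipv4"] = {"method": "auto"}
    conn["ipv6"] = {"method": "ignore"}

    path = ("/etc/NetworkManager/system-connections/"
            "ci-private-network.nmconnection")
    with open(path, "w") as f:
        conn.write(f)
    os.chmod(path, 0o600)   # NM ignores keyfiles readable by others
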
Nov 29 04:36:54 np0005539482.novalocal sudo[7185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhmgjooinmwviqjqxhwwycaklsghbpcy ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 04:36:54 np0005539482.novalocal sudo[7185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:36:54 np0005539482.novalocal python3[7187]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 04:36:54 np0005539482.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 04:36:54 np0005539482.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 29 04:36:54 np0005539482.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[856]: <info>  [1764391014.9098] caught SIGTERM, shutting down normally.
Nov 29 04:36:54 np0005539482.novalocal systemd[1]: Stopping Network Manager...
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[856]: <info>  [1764391014.9108] dhcp4 (eth0): canceled DHCP transaction
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[856]: <info>  [1764391014.9108] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[856]: <info>  [1764391014.9109] dhcp4 (eth0): state changed no lease
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[856]: <info>  [1764391014.9113] manager: NetworkManager state is now CONNECTING
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[856]: <info>  [1764391014.9213] dhcp4 (eth1): canceled DHCP transaction
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[856]: <info>  [1764391014.9214] dhcp4 (eth1): state changed no lease
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[856]: <info>  [1764391014.9281] exiting (success)
Nov 29 04:36:54 np0005539482.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 04:36:54 np0005539482.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 04:36:54 np0005539482.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 04:36:54 np0005539482.novalocal systemd[1]: Stopped Network Manager.
Nov 29 04:36:54 np0005539482.novalocal systemd[1]: Starting Network Manager...
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391014.9929] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:919d61e4-148b-4df4-a773-feb4933c1c42)
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391014.9931] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 04:36:54 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391014.9996] manager[0x5562679f0070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 04:36:55 np0005539482.novalocal systemd[1]: Starting Hostname Service...
Nov 29 04:36:55 np0005539482.novalocal systemd[1]: Started Hostname Service.
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1165] hostname: hostname: using hostnamed
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1166] hostname: static hostname changed from (none) to "np0005539482.novalocal"
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1173] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1179] manager[0x5562679f0070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1179] manager[0x5562679f0070]: rfkill: WWAN hardware radio set enabled
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1223] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1223] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1224] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1225] manager: Networking is enabled by state file
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1230] settings: Loaded settings plugin: keyfile (internal)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1237] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1282] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1299] dhcp: init: Using DHCP client 'internal'
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1304] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1313] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1326] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1340] device (lo): Activation: starting connection 'lo' (aeac58a6-e034-4337-948c-d58870c36302)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1352] device (eth0): carrier: link connected
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1360] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1371] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1373] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1386] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1399] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1410] device (eth1): carrier: link connected
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1417] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1427] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (68471d98-bb78-39be-9a57-275a98f2e1d6) (indicated)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1427] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1439] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1451] device (eth1): Activation: starting connection 'Wired connection 1' (68471d98-bb78-39be-9a57-275a98f2e1d6)
Nov 29 04:36:55 np0005539482.novalocal systemd[1]: Started Network Manager.
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1462] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1470] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1477] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1481] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1488] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1494] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1499] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1505] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1511] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1523] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1533] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1547] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1555] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1580] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1589] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1599] device (lo): Activation: successful, device activated.
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1611] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1636] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 04:36:55 np0005539482.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1727] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal sudo[7185]: pam_unix(sudo:session): session closed for user root
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1764] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1767] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1772] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1777] device (eth0): Activation: successful, device activated.
Nov 29 04:36:55 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391015.1784] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 04:36:55 np0005539482.novalocal python3[7273]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-7f1d-78cb-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
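
The "ip route" task above is the CI job confirming the DHCP-derived default route that the policy line set a moment earlier. Roughly the same check by hand, with the lease address from the preceding dhcp4 entry:

    ip route show default        # expect: default via <gateway> dev eth0 ...
    ip -4 addr show dev eth0     # expect the leased address 38.102.83.17
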
Nov 29 04:37:05 np0005539482.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 04:37:25 np0005539482.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 04:37:33 np0005539482.novalocal sshd-session[7278]: Invalid user ventas01 from 190.0.247.85 port 51724
Nov 29 04:37:33 np0005539482.novalocal sshd-session[7278]: Received disconnect from 190.0.247.85 port 51724:11: Bye Bye [preauth]
Nov 29 04:37:33 np0005539482.novalocal sshd-session[7278]: Disconnected from invalid user ventas01 190.0.247.85 port 51724 [preauth]
Nov 29 04:37:39 np0005539482.novalocal sshd-session[7280]: Invalid user testuser from 52.224.240.74 port 33154
Nov 29 04:37:39 np0005539482.novalocal sshd-session[7280]: Received disconnect from 52.224.240.74 port 33154:11: Bye Bye [preauth]
Nov 29 04:37:39 np0005539482.novalocal sshd-session[7280]: Disconnected from invalid user testuser 52.224.240.74 port 33154 [preauth]
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3469] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 04:37:40 np0005539482.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 04:37:40 np0005539482.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3768] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3771] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3777] device (eth1): Activation: successful, device activated.
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3781] manager: startup complete
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3783] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <warn>  [1764391060.3786] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3793] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 04:37:40 np0005539482.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3933] dhcp4 (eth1): canceled DHCP transaction
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3934] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3934] dhcp4 (eth1): state changed no lease
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3949] policy: auto-activating connection 'ci-private-network' (ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4)
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3953] device (eth1): Activation: starting connection 'ci-private-network' (ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4)
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3954] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3956] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3962] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.3969] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.4001] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.4003] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 04:37:40 np0005539482.novalocal NetworkManager[7200]: <info>  [1764391060.4008] device (eth1): Activation: successful, device activated.
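
eth1 never obtained a DHCP lease, so the assumed 'Wired connection 1' failed with reason 'ip-config-unavailable' and the policy auto-activated the 'ci-private-network' profile instead. A hedged sketch of how such a fallback profile could be created; the address and priority below are illustrative assumptions, not values taken from this host:

    # Hypothetical static fallback profile for eth1.
    nmcli connection add type ethernet ifname eth1 con-name ci-private-network \
        ipv4.method manual ipv4.addresses 192.168.100.10/24 \
        connection.autoconnect yes connection.autoconnect-priority -10
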
Nov 29 04:37:50 np0005539482.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 04:37:52 np0005539482.novalocal sudo[7380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nubcnxbjpxwdivyuxtrouhsbcuucsmbn ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 04:37:52 np0005539482.novalocal sudo[7380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:37:52 np0005539482.novalocal python3[7382]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:37:53 np0005539482.novalocal sudo[7380]: pam_unix(sudo:session): session closed for user root
Nov 29 04:37:53 np0005539482.novalocal sudo[7453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raamdmeswjczypamrkjybsciagapmzsv ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 29 04:37:53 np0005539482.novalocal sudo[7453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:37:53 np0005539482.novalocal python3[7455]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764391072.6651232-267-220928140765279/source _original_basename=tmp6z37wb1h follow=False checksum=2dfbd593b187155bf8a3fd475333efd17513319b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:37:53 np0005539482.novalocal sudo[7453]: pam_unix(sudo:session): session closed for user root
Nov 29 04:37:58 np0005539482.novalocal sshd-session[7480]: Invalid user ubuntu from 176.109.67.96 port 56878
Nov 29 04:37:58 np0005539482.novalocal sshd-session[7480]: Received disconnect from 176.109.67.96 port 56878:11: Bye Bye [preauth]
Nov 29 04:37:58 np0005539482.novalocal sshd-session[7480]: Disconnected from invalid user ubuntu 176.109.67.96 port 56878 [preauth]
Nov 29 04:38:04 np0005539482.novalocal systemd[4298]: Starting Mark boot as successful...
Nov 29 04:38:04 np0005539482.novalocal systemd[4298]: Finished Mark boot as successful.
Nov 29 04:38:08 np0005539482.novalocal sshd-session[7482]: Connection closed by 101.47.141.125 port 56308 [preauth]
Nov 29 04:38:44 np0005539482.novalocal sshd-session[7485]: Received disconnect from 190.0.247.85 port 48216:11: Bye Bye [preauth]
Nov 29 04:38:44 np0005539482.novalocal sshd-session[7485]: Disconnected from authenticating user root 190.0.247.85 port 48216 [preauth]
Nov 29 04:38:53 np0005539482.novalocal sshd-session[4307]: Received disconnect from 38.102.83.114 port 58048:11: disconnected by user
Nov 29 04:38:53 np0005539482.novalocal sshd-session[4307]: Disconnected from user zuul 38.102.83.114 port 58048
Nov 29 04:38:53 np0005539482.novalocal sshd-session[4294]: pam_unix(sshd:session): session closed for user zuul
Nov 29 04:38:53 np0005539482.novalocal systemd-logind[793]: Session 1 logged out. Waiting for processes to exit.
Nov 29 04:39:03 np0005539482.novalocal sshd-session[7487]: Invalid user kingbase from 176.109.67.96 port 57726
Nov 29 04:39:03 np0005539482.novalocal sshd-session[7487]: Received disconnect from 176.109.67.96 port 57726:11: Bye Bye [preauth]
Nov 29 04:39:03 np0005539482.novalocal sshd-session[7487]: Disconnected from invalid user kingbase 176.109.67.96 port 57726 [preauth]
Nov 29 04:39:18 np0005539482.novalocal sshd-session[7489]: Invalid user vlad from 52.224.240.74 port 46168
Nov 29 04:39:18 np0005539482.novalocal sshd-session[7489]: Received disconnect from 52.224.240.74 port 46168:11: Bye Bye [preauth]
Nov 29 04:39:18 np0005539482.novalocal sshd-session[7489]: Disconnected from invalid user vlad 52.224.240.74 port 46168 [preauth]
Nov 29 04:39:57 np0005539482.novalocal sshd-session[7491]: Invalid user terraria from 190.0.247.85 port 43430
Nov 29 04:39:57 np0005539482.novalocal sshd-session[7491]: Received disconnect from 190.0.247.85 port 43430:11: Bye Bye [preauth]
Nov 29 04:39:57 np0005539482.novalocal sshd-session[7491]: Disconnected from invalid user terraria 190.0.247.85 port 43430 [preauth]
Nov 29 04:40:10 np0005539482.novalocal sshd-session[7493]: Invalid user dmdba from 176.109.67.96 port 38158
Nov 29 04:40:10 np0005539482.novalocal sshd-session[7493]: Received disconnect from 176.109.67.96 port 38158:11: Bye Bye [preauth]
Nov 29 04:40:10 np0005539482.novalocal sshd-session[7493]: Disconnected from invalid user dmdba 176.109.67.96 port 38158 [preauth]
Nov 29 04:41:04 np0005539482.novalocal systemd[4298]: Created slice User Background Tasks Slice.
Nov 29 04:41:04 np0005539482.novalocal systemd[4298]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 04:41:04 np0005539482.novalocal systemd[4298]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 04:41:15 np0005539482.novalocal sshd-session[7500]: Invalid user int from 190.0.247.85 port 46588
Nov 29 04:41:15 np0005539482.novalocal sshd-session[7500]: Received disconnect from 190.0.247.85 port 46588:11: Bye Bye [preauth]
Nov 29 04:41:15 np0005539482.novalocal sshd-session[7500]: Disconnected from invalid user int 190.0.247.85 port 46588 [preauth]
Nov 29 04:41:18 np0005539482.novalocal sshd-session[7502]: Invalid user work from 176.109.67.96 port 55880
Nov 29 04:41:18 np0005539482.novalocal sshd-session[7502]: Received disconnect from 176.109.67.96 port 55880:11: Bye Bye [preauth]
Nov 29 04:41:18 np0005539482.novalocal sshd-session[7502]: Disconnected from invalid user work 176.109.67.96 port 55880 [preauth]
Nov 29 04:42:28 np0005539482.novalocal sshd-session[7504]: Invalid user kiosk from 176.109.67.96 port 50030
Nov 29 04:42:28 np0005539482.novalocal sshd-session[7504]: Received disconnect from 176.109.67.96 port 50030:11: Bye Bye [preauth]
Nov 29 04:42:28 np0005539482.novalocal sshd-session[7504]: Disconnected from invalid user kiosk 176.109.67.96 port 50030 [preauth]
Nov 29 04:42:30 np0005539482.novalocal sshd-session[7506]: Received disconnect from 190.0.247.85 port 47706:11: Bye Bye [preauth]
Nov 29 04:42:30 np0005539482.novalocal sshd-session[7506]: Disconnected from authenticating user root 190.0.247.85 port 47706 [preauth]
Nov 29 04:42:41 np0005539482.novalocal sshd-session[7511]: Accepted publickey for zuul from 38.102.83.114 port 54110 ssh2: RSA SHA256:claowykt67vOzr+EIqjbzPN7v3ZYSs573uWOdaK+kuE
Nov 29 04:42:41 np0005539482.novalocal systemd-logind[793]: New session 3 of user zuul.
Nov 29 04:42:41 np0005539482.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 29 04:42:41 np0005539482.novalocal sshd-session[7511]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 04:42:41 np0005539482.novalocal sudo[7538]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrjwnonixonodgvcttotzwsmiqqkgdzv ; /usr/bin/python3'
Nov 29 04:42:41 np0005539482.novalocal sudo[7538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:41 np0005539482.novalocal python3[7540]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163e3b-3c83-9482-9f77-000000001cc4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:42:41 np0005539482.novalocal sudo[7538]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:41 np0005539482.novalocal sudo[7567]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frnegzogrqohihtasfvhtuugspjiothf ; /usr/bin/python3'
Nov 29 04:42:41 np0005539482.novalocal sudo[7567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:41 np0005539482.novalocal python3[7569]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:42:41 np0005539482.novalocal sudo[7567]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:42 np0005539482.novalocal sudo[7593]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfbyazntcwlqtyszqbqyyhvthssipguj ; /usr/bin/python3'
Nov 29 04:42:42 np0005539482.novalocal sudo[7593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:42 np0005539482.novalocal sshd-session[7508]: Invalid user ubuntu from 101.47.141.125 port 54890
Nov 29 04:42:42 np0005539482.novalocal python3[7595]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:42:42 np0005539482.novalocal sudo[7593]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:42 np0005539482.novalocal sudo[7619]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncpubccmomvdqbaenmsfrgpfokeunxwa ; /usr/bin/python3'
Nov 29 04:42:42 np0005539482.novalocal sudo[7619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:42 np0005539482.novalocal sshd-session[7508]: Received disconnect from 101.47.141.125 port 54890:11: Bye Bye [preauth]
Nov 29 04:42:42 np0005539482.novalocal sshd-session[7508]: Disconnected from invalid user ubuntu 101.47.141.125 port 54890 [preauth]
Nov 29 04:42:42 np0005539482.novalocal python3[7621]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:42:42 np0005539482.novalocal sudo[7619]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:42 np0005539482.novalocal sudo[7645]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oafncixlzzpktbelptugfytfpaomxjoz ; /usr/bin/python3'
Nov 29 04:42:42 np0005539482.novalocal sudo[7645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:42 np0005539482.novalocal python3[7647]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:42:42 np0005539482.novalocal sudo[7645]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:43 np0005539482.novalocal sudo[7671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzjjhrybvydcgxudvfdrkyhkkprbhwko ; /usr/bin/python3'
Nov 29 04:42:43 np0005539482.novalocal sudo[7671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:43 np0005539482.novalocal python3[7673]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:42:43 np0005539482.novalocal sudo[7671]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:43 np0005539482.novalocal sudo[7749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqaqqksbhvsiqshttqkobksfabdxbeua ; /usr/bin/python3'
Nov 29 04:42:43 np0005539482.novalocal sudo[7749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:43 np0005539482.novalocal python3[7751]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:42:43 np0005539482.novalocal sudo[7749]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:43 np0005539482.novalocal sudo[7822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgnamhstpnykyzjjpwyticixvbyhrdyl ; /usr/bin/python3'
Nov 29 04:42:43 np0005539482.novalocal sudo[7822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:44 np0005539482.novalocal python3[7824]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764391363.531705-468-98046396210304/source _original_basename=tmpb0vb3svx follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:42:44 np0005539482.novalocal sudo[7822]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:44 np0005539482.novalocal sudo[7872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhpxcrjmegsshbihlblkfonrafkrbhna ; /usr/bin/python3'
Nov 29 04:42:44 np0005539482.novalocal sudo[7872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:45 np0005539482.novalocal python3[7874]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 04:42:45 np0005539482.novalocal systemd[1]: Reloading.
Nov 29 04:42:45 np0005539482.novalocal systemd-rc-local-generator[7896]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 04:42:45 np0005539482.novalocal sudo[7872]: pam_unix(sudo:session): session closed for user root
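
The tasks above install /etc/systemd/system.conf.d/override.conf (its payload is redacted as content=NOT_LOGGING_PARAMETER) and then run the equivalent of a daemon-reload, which produces the "Reloading." entry. A hypothetical version of the same step by hand; the drop-in contents are an assumption, since the real file is not logged:

    mkdir -p /etc/systemd/system.conf.d
    cat >/etc/systemd/system.conf.d/override.conf <<'EOF'
    [Manager]
    DefaultIOAccounting=yes
    EOF
    systemctl daemon-reload    # matches the "Reloading." entry above
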
Nov 29 04:42:46 np0005539482.novalocal sudo[7928]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmcmndalsrpnfbhzybtjjlliwrvsutdc ; /usr/bin/python3'
Nov 29 04:42:46 np0005539482.novalocal sudo[7928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:46 np0005539482.novalocal python3[7930]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 04:42:46 np0005539482.novalocal sudo[7928]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:46 np0005539482.novalocal sudo[7954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypjsslfxzhmhtifpxolkvdfgkojsamzp ; /usr/bin/python3'
Nov 29 04:42:46 np0005539482.novalocal sudo[7954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:47 np0005539482.novalocal python3[7956]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:42:47 np0005539482.novalocal sudo[7954]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:47 np0005539482.novalocal sudo[7982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxbzbmpvyxoxsnrnkffakvtnngqktrge ; /usr/bin/python3'
Nov 29 04:42:47 np0005539482.novalocal sudo[7982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:47 np0005539482.novalocal python3[7984]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:42:47 np0005539482.novalocal sudo[7982]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:47 np0005539482.novalocal sudo[8010]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfxwafinzdmqzwaoorxsovbozzzgvaay ; /usr/bin/python3'
Nov 29 04:42:47 np0005539482.novalocal sudo[8010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:47 np0005539482.novalocal python3[8012]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:42:47 np0005539482.novalocal sudo[8010]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:47 np0005539482.novalocal sudo[8038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jybkpndesexfbenkwyglnybrwtarhhya ; /usr/bin/python3'
Nov 29 04:42:47 np0005539482.novalocal sudo[8038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:47 np0005539482.novalocal python3[8040]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:42:47 np0005539482.novalocal sudo[8038]: pam_unix(sudo:session): session closed for user root
Nov 29 04:42:48 np0005539482.novalocal python3[8067]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e3b-3c83-9482-9f77-000000001ccb-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
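
The loop above writes cgroup v2 I/O throttles and then reads them back: "252:0" is the MAJ:MIN of /dev/vda (queried by the earlier lsblk task), and the four keys cap reads and writes at 18000 IOPS and 262144000 bytes/s (250 MiB/s) for each top-level slice. The same write and verification by hand, for one slice:

    lsblk -nd -o MAJ:MIN /dev/vda                       # -> 252:0 on this host
    echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
        > /sys/fs/cgroup/system.slice/io.max
    cat /sys/fs/cgroup/system.slice/io.max              # confirm the limits took effect
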
Nov 29 04:42:48 np0005539482.novalocal python3[8097]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 04:42:50 np0005539482.novalocal sshd-session[7514]: Connection closed by 38.102.83.114 port 54110
Nov 29 04:42:50 np0005539482.novalocal sshd-session[7511]: pam_unix(sshd:session): session closed for user zuul
Nov 29 04:42:50 np0005539482.novalocal systemd-logind[793]: Session 3 logged out. Waiting for processes to exit.
Nov 29 04:42:50 np0005539482.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 04:42:50 np0005539482.novalocal systemd[1]: session-3.scope: Consumed 3.878s CPU time.
Nov 29 04:42:50 np0005539482.novalocal systemd-logind[793]: Removed session 3.
Nov 29 04:42:52 np0005539482.novalocal sshd-session[8104]: Accepted publickey for zuul from 38.102.83.114 port 53326 ssh2: RSA SHA256:claowykt67vOzr+EIqjbzPN7v3ZYSs573uWOdaK+kuE
Nov 29 04:42:52 np0005539482.novalocal systemd-logind[793]: New session 4 of user zuul.
Nov 29 04:42:52 np0005539482.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 29 04:42:52 np0005539482.novalocal sshd-session[8104]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 04:42:52 np0005539482.novalocal sudo[8131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttmcyfxkkleiikmobncrslalxakllesf ; /usr/bin/python3'
Nov 29 04:42:52 np0005539482.novalocal sudo[8131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:42:52 np0005539482.novalocal python3[8133]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
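
The dnf task above is the module form of a plain package install; the CLI equivalent is below. The SELinux policy reloads that follow are most likely the container-selinux dependency pulled in by podman re-loading policy during installation:

    dnf -y install podman buildah
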
Nov 29 04:43:06 np0005539482.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 04:43:06 np0005539482.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 04:43:06 np0005539482.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 04:43:06 np0005539482.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 04:43:06 np0005539482.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 04:43:06 np0005539482.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 04:43:06 np0005539482.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 04:43:06 np0005539482.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 04:43:15 np0005539482.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 04:43:15 np0005539482.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 04:43:15 np0005539482.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 04:43:15 np0005539482.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 04:43:15 np0005539482.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 04:43:15 np0005539482.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 04:43:15 np0005539482.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 04:43:15 np0005539482.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 04:43:24 np0005539482.novalocal sshd-session[8192]: Received disconnect from 80.94.93.233 port 35588:11:  [preauth]
Nov 29 04:43:24 np0005539482.novalocal sshd-session[8192]: Disconnected from authenticating user root 80.94.93.233 port 35588 [preauth]
Nov 29 04:43:24 np0005539482.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 29 04:43:24 np0005539482.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 04:43:24 np0005539482.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 04:43:24 np0005539482.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 04:43:24 np0005539482.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 04:43:24 np0005539482.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 04:43:24 np0005539482.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 04:43:24 np0005539482.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 04:43:25 np0005539482.novalocal setsebool[8202]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 04:43:25 np0005539482.novalocal setsebool[8202]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
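
Those two boolean flips correspond to the manual commands below; whether the playbook also passed -P to persist them across reboots is not visible from these lines:

    setsebool virt_use_nfs=1 virt_sandbox_use_all_caps=1   # add -P to persist
    getsebool virt_use_nfs virt_sandbox_use_all_caps       # verify both are "on"
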
Nov 29 04:43:32 np0005539482.novalocal sshd-session[8211]: banner exchange: Connection from 45.227.254.155 port 65292: invalid format
Nov 29 04:43:34 np0005539482.novalocal sshd-session[8212]: Invalid user intell from 176.109.67.96 port 33858
Nov 29 04:43:35 np0005539482.novalocal sshd-session[8212]: Received disconnect from 176.109.67.96 port 33858:11: Bye Bye [preauth]
Nov 29 04:43:35 np0005539482.novalocal sshd-session[8212]: Disconnected from invalid user intell 176.109.67.96 port 33858 [preauth]
Nov 29 04:43:37 np0005539482.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 29 04:43:37 np0005539482.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 04:43:37 np0005539482.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 29 04:43:37 np0005539482.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 04:43:37 np0005539482.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 29 04:43:37 np0005539482.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 04:43:37 np0005539482.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 04:43:37 np0005539482.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 04:43:42 np0005539482.novalocal sshd-session[8921]: Invalid user free from 190.0.247.85 port 51234
Nov 29 04:43:42 np0005539482.novalocal sshd-session[8921]: Received disconnect from 190.0.247.85 port 51234:11: Bye Bye [preauth]
Nov 29 04:43:42 np0005539482.novalocal sshd-session[8921]: Disconnected from invalid user free 190.0.247.85 port 51234 [preauth]
Nov 29 04:43:56 np0005539482.novalocal dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 04:43:56 np0005539482.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 04:43:56 np0005539482.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 29 04:43:56 np0005539482.novalocal systemd[1]: Reloading.
Nov 29 04:43:56 np0005539482.novalocal systemd-rc-local-generator[8961]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 04:43:56 np0005539482.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 04:43:58 np0005539482.novalocal sudo[8131]: pam_unix(sudo:session): session closed for user root
Nov 29 04:44:02 np0005539482.novalocal python3[13858]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163e3b-3c83-b818-508a-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:44:03 np0005539482.novalocal kernel: evm: overlay not supported
Nov 29 04:44:03 np0005539482.novalocal systemd[4298]: Starting D-Bus User Message Bus...
Nov 29 04:44:03 np0005539482.novalocal dbus-broker-launch[14092]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 04:44:03 np0005539482.novalocal dbus-broker-launch[14092]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 04:44:03 np0005539482.novalocal systemd[4298]: Started D-Bus User Message Bus.
Nov 29 04:44:03 np0005539482.novalocal dbus-broker-launch[14092]: Ready
Nov 29 04:44:03 np0005539482.novalocal systemd[4298]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 04:44:03 np0005539482.novalocal systemd[4298]: Created slice Slice /user.
Nov 29 04:44:03 np0005539482.novalocal systemd[4298]: podman-14073.scope: unit configures an IP firewall, but not running as root.
Nov 29 04:44:03 np0005539482.novalocal systemd[4298]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 04:44:03 np0005539482.novalocal systemd[4298]: Started podman-14073.scope.
Nov 29 04:44:03 np0005539482.novalocal systemd[4298]: Started podman-pause-0438675b.scope.
Nov 29 04:44:04 np0005539482.novalocal sudo[14437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdjpkjpxqslqacdkshhgsdzzbuqtzvor ; /usr/bin/python3'
Nov 29 04:44:04 np0005539482.novalocal sudo[14437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:44:04 np0005539482.novalocal python3[14447]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.30:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.30:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:44:04 np0005539482.novalocal python3[14447]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 29 04:44:04 np0005539482.novalocal sudo[14437]: pam_unix(sudo:session): session closed for user root
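
The blockinfile task above appends a marker-delimited TOML stanza to /etc/containers/registries.conf so podman will pull from the CI registry over plain HTTP. The resulting block, reconstructed from the logged parameters (marker "# {mark} ANSIBLE MANAGED BLOCK" with BEGIN/END):

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.30:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
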
Nov 29 04:44:04 np0005539482.novalocal sshd-session[8107]: Connection closed by 38.102.83.114 port 53326
Nov 29 04:44:04 np0005539482.novalocal sshd-session[8104]: pam_unix(sshd:session): session closed for user zuul
Nov 29 04:44:04 np0005539482.novalocal systemd-logind[793]: Session 4 logged out. Waiting for processes to exit.
Nov 29 04:44:04 np0005539482.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 04:44:04 np0005539482.novalocal systemd[1]: session-4.scope: Consumed 58.854s CPU time.
Nov 29 04:44:04 np0005539482.novalocal systemd-logind[793]: Removed session 4.
Nov 29 04:44:22 np0005539482.novalocal sshd-session[21537]: Connection closed by 38.102.83.113 port 32866 [preauth]
Nov 29 04:44:22 np0005539482.novalocal sshd-session[21546]: Connection closed by 38.102.83.113 port 32876 [preauth]
Nov 29 04:44:22 np0005539482.novalocal sshd-session[21544]: Unable to negotiate with 38.102.83.113 port 32878: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 29 04:44:22 np0005539482.novalocal sshd-session[21540]: Unable to negotiate with 38.102.83.113 port 32888: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 29 04:44:22 np0005539482.novalocal sshd-session[21548]: Unable to negotiate with 38.102.83.113 port 32880: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 29 04:44:27 np0005539482.novalocal sshd-session[23476]: Accepted publickey for zuul from 38.102.83.114 port 37846 ssh2: RSA SHA256:claowykt67vOzr+EIqjbzPN7v3ZYSs573uWOdaK+kuE
Nov 29 04:44:27 np0005539482.novalocal systemd-logind[793]: New session 5 of user zuul.
Nov 29 04:44:27 np0005539482.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 29 04:44:27 np0005539482.novalocal sshd-session[23476]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 04:44:27 np0005539482.novalocal python3[23588]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAZtOFYhQMEa5nYlDS3yTR0mwPfNdibYk5CkrJGGicpFqhJ3ZDd/9qZuUQiiYA5rEM9cOLorGiDfXnpK64Jn/o= zuul@np0005539481.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:44:27 np0005539482.novalocal sudo[23779]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqcjioyktzjppotpklmxddcsnwmjuerc ; /usr/bin/python3'
Nov 29 04:44:27 np0005539482.novalocal sudo[23779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:44:28 np0005539482.novalocal python3[23791]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAZtOFYhQMEa5nYlDS3yTR0mwPfNdibYk5CkrJGGicpFqhJ3ZDd/9qZuUQiiYA5rEM9cOLorGiDfXnpK64Jn/o= zuul@np0005539481.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:44:28 np0005539482.novalocal sudo[23779]: pam_unix(sudo:session): session closed for user root
Nov 29 04:44:28 np0005539482.novalocal sudo[24128]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffzsglujsmyfqqacklczarxadkvdqpmy ; /usr/bin/python3'
Nov 29 04:44:28 np0005539482.novalocal sudo[24128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:44:28 np0005539482.novalocal python3[24139]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539482.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 04:44:28 np0005539482.novalocal useradd[24232]: new group: name=cloud-admin, GID=1002
Nov 29 04:44:28 np0005539482.novalocal useradd[24232]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 29 04:44:29 np0005539482.novalocal sudo[24128]: pam_unix(sudo:session): session closed for user root
Nov 29 04:44:29 np0005539482.novalocal sudo[24358]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlihpstyrplpopfppjnwczrnjtfedgil ; /usr/bin/python3'
Nov 29 04:44:29 np0005539482.novalocal sudo[24358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:44:29 np0005539482.novalocal python3[24370]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAZtOFYhQMEa5nYlDS3yTR0mwPfNdibYk5CkrJGGicpFqhJ3ZDd/9qZuUQiiYA5rEM9cOLorGiDfXnpK64Jn/o= zuul@np0005539481.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 04:44:29 np0005539482.novalocal sudo[24358]: pam_unix(sudo:session): session closed for user root
Nov 29 04:44:29 np0005539482.novalocal sudo[24640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkhckdnmhyobcxobiqwbymefpsjolaep ; /usr/bin/python3'
Nov 29 04:44:29 np0005539482.novalocal sudo[24640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:44:29 np0005539482.novalocal python3[24650]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:44:29 np0005539482.novalocal sudo[24640]: pam_unix(sudo:session): session closed for user root
Nov 29 04:44:30 np0005539482.novalocal sudo[24909]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imcexnnnoboloalidjdripzzmasuvoiw ; /usr/bin/python3'
Nov 29 04:44:30 np0005539482.novalocal sudo[24909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:44:30 np0005539482.novalocal python3[24919]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764391469.5048835-135-234232653152013/source _original_basename=tmpsqondq3e follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:44:30 np0005539482.novalocal sudo[24909]: pam_unix(sudo:session): session closed for user root
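
The stat/copy pair above installs /etc/sudoers.d/cloud-admin with mode 0640; the payload itself is redacted (content=NOT_LOGGING_PARAMETER). A hypothetical drop-in consistent with a passwordless CI admin user, plus the standard validation step; only the path and mode come from the log:

    cat >/etc/sudoers.d/cloud-admin <<'EOF'
    cloud-admin ALL=(ALL) NOPASSWD:ALL
    EOF
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin   # syntax-check before trusting the file
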
Nov 29 04:44:30 np0005539482.novalocal sudo[25244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwybmiumktculftlbcmcrdnzoxzbvlmv ; /usr/bin/python3'
Nov 29 04:44:30 np0005539482.novalocal sudo[25244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:44:30 np0005539482.novalocal python3[25251]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 29 04:44:31 np0005539482.novalocal systemd[1]: Starting Hostname Service...
Nov 29 04:44:31 np0005539482.novalocal systemd[1]: Started Hostname Service.
Nov 29 04:44:31 np0005539482.novalocal systemd-hostnamed[25361]: Changed pretty hostname to 'compute-0'
Nov 29 04:44:31 compute-0 systemd-hostnamed[25361]: Hostname set to <compute-0> (static)
Nov 29 04:44:31 compute-0 NetworkManager[7200]: <info>  [1764391471.0943] hostname: static hostname changed from "np0005539482.novalocal" to "compute-0"
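
ansible.builtin.hostname with use=systemd drives systemd-hostnamed over D-Bus, which is why both the pretty and static names change above and NetworkManager picks the rename up immediately. The interactive equivalent:

    hostnamectl set-hostname compute-0
    hostnamectl status        # static hostname now reads "compute-0"
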
Nov 29 04:44:31 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 04:44:31 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 04:44:31 compute-0 sudo[25244]: pam_unix(sudo:session): session closed for user root
Nov 29 04:44:31 compute-0 sshd-session[23526]: Connection closed by 38.102.83.114 port 37846
Nov 29 04:44:31 compute-0 sshd-session[23476]: pam_unix(sshd:session): session closed for user zuul
Nov 29 04:44:31 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 04:44:31 compute-0 systemd[1]: session-5.scope: Consumed 2.076s CPU time.
Nov 29 04:44:31 compute-0 systemd-logind[793]: Session 5 logged out. Waiting for processes to exit.
Nov 29 04:44:31 compute-0 systemd-logind[793]: Removed session 5.
Nov 29 04:44:40 compute-0 sshd-session[29164]: Invalid user deploy from 176.109.67.96 port 37846
Nov 29 04:44:40 compute-0 sshd-session[29164]: Received disconnect from 176.109.67.96 port 37846:11: Bye Bye [preauth]
Nov 29 04:44:40 compute-0 sshd-session[29164]: Disconnected from invalid user deploy 176.109.67.96 port 37846 [preauth]
Nov 29 04:44:41 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 04:44:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 04:44:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 04:44:41 compute-0 systemd[1]: man-db-cache-update.service: Consumed 55.108s CPU time.
Nov 29 04:44:41 compute-0 systemd[1]: run-r6d2abc60798b4867af5b1b4e8f1b42bc.service: Deactivated successfully.
Nov 29 04:44:52 compute-0 sshd-session[29957]: Invalid user superadmin from 190.0.247.85 port 45524
Nov 29 04:44:52 compute-0 sshd-session[29957]: Received disconnect from 190.0.247.85 port 45524:11: Bye Bye [preauth]
Nov 29 04:44:52 compute-0 sshd-session[29957]: Disconnected from invalid user superadmin 190.0.247.85 port 45524 [preauth]
Nov 29 04:45:01 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 04:45:04 compute-0 sshd-session[29961]: error: kex_exchange_identification: read: Connection reset by peer
Nov 29 04:45:04 compute-0 sshd-session[29961]: Connection reset by 101.47.141.125 port 48778
Nov 29 04:45:47 compute-0 sshd-session[29962]: Invalid user pzuser from 176.109.67.96 port 59888
Nov 29 04:45:47 compute-0 sshd-session[29962]: Received disconnect from 176.109.67.96 port 59888:11: Bye Bye [preauth]
Nov 29 04:45:47 compute-0 sshd-session[29962]: Disconnected from invalid user pzuser 176.109.67.96 port 59888 [preauth]
Nov 29 04:46:04 compute-0 sshd-session[29967]: Invalid user postgres from 190.0.247.85 port 60152
Nov 29 04:46:04 compute-0 sshd-session[29967]: Received disconnect from 190.0.247.85 port 60152:11: Bye Bye [preauth]
Nov 29 04:46:04 compute-0 sshd-session[29967]: Disconnected from invalid user postgres 190.0.247.85 port 60152 [preauth]
Nov 29 04:46:56 compute-0 sshd-session[29969]: Invalid user superadmin from 176.109.67.96 port 44048
Nov 29 04:46:56 compute-0 sshd-session[29969]: Received disconnect from 176.109.67.96 port 44048:11: Bye Bye [preauth]
Nov 29 04:46:56 compute-0 sshd-session[29969]: Disconnected from invalid user superadmin 176.109.67.96 port 44048 [preauth]
Nov 29 04:47:20 compute-0 sshd-session[29971]: Invalid user g from 190.0.247.85 port 38758
Nov 29 04:47:21 compute-0 sshd-session[29971]: Received disconnect from 190.0.247.85 port 38758:11: Bye Bye [preauth]
Nov 29 04:47:21 compute-0 sshd-session[29971]: Disconnected from invalid user g 190.0.247.85 port 38758 [preauth]
Nov 29 04:48:03 compute-0 sshd-session[29974]: Accepted publickey for zuul from 38.102.83.113 port 47734 ssh2: RSA SHA256:claowykt67vOzr+EIqjbzPN7v3ZYSs573uWOdaK+kuE
Nov 29 04:48:03 compute-0 systemd-logind[793]: New session 6 of user zuul.
Nov 29 04:48:03 compute-0 systemd[1]: Started Session 6 of User zuul.
Nov 29 04:48:03 compute-0 sshd-session[29974]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 04:48:04 compute-0 python3[30050]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 04:48:05 compute-0 sudo[30164]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfqvecbkhgcawlulycqqbqbqfscckykm ; /usr/bin/python3'
Nov 29 04:48:05 compute-0 sudo[30164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:05 compute-0 python3[30166]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:48:05 compute-0 sudo[30164]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:06 compute-0 sudo[30237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aewivvutvmdniwdcknfqfbxcoedjkzhc ; /usr/bin/python3'
Nov 29 04:48:06 compute-0 sudo[30237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:06 compute-0 python3[30239]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:48:06 compute-0 sudo[30237]: pam_unix(sudo:session): session closed for user root
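
The copy tasks in this block drop RDO "delorean" trunk repo files into /etc/yum.repos.d/ (contents redacted as NOT_LOGGING_PARAMETER). A hypothetical skeleton of such a repo file, for orientation only; the baseurl is an assumption, not the logged value:

    cat >/etc/yum.repos.d/delorean.repo <<'EOF'
    [delorean]
    name=delorean
    baseurl=https://trunk.rdoproject.org/centos9-antelope/current/
    enabled=1
    gpgcheck=0
    EOF
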
Nov 29 04:48:06 compute-0 sudo[30263]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inshzibgzrfdyvnptiaeadmiueoohwvi ; /usr/bin/python3'
Nov 29 04:48:06 compute-0 sudo[30263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:06 compute-0 python3[30265]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:48:06 compute-0 sudo[30263]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:06 compute-0 sudo[30336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcamkdwzpliegiqpajonvjizfjqazchw ; /usr/bin/python3'
Nov 29 04:48:06 compute-0 sudo[30336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:06 compute-0 python3[30338]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:48:06 compute-0 sudo[30336]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:06 compute-0 sudo[30362]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqdjgqfqestsvbcomwongbyouavvrvmd ; /usr/bin/python3'
Nov 29 04:48:06 compute-0 sudo[30362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:07 compute-0 python3[30364]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:48:07 compute-0 sudo[30362]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:07 compute-0 sudo[30435]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpwfoczscezjsccnliqbmhqbvuijtmsn ; /usr/bin/python3'
Nov 29 04:48:07 compute-0 sudo[30435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:07 compute-0 python3[30437]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:48:07 compute-0 sudo[30435]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:07 compute-0 sudo[30461]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iceonzagmxpwsofjbfqrwnxctvtieflb ; /usr/bin/python3'
Nov 29 04:48:07 compute-0 sudo[30461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:07 compute-0 python3[30463]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:48:07 compute-0 sudo[30461]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:07 compute-0 sudo[30534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkiqykaskcrujhrtqhyxnlwkliyydkdy ; /usr/bin/python3'
Nov 29 04:48:07 compute-0 sudo[30534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:07 compute-0 python3[30536]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:48:07 compute-0 sudo[30534]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:08 compute-0 sudo[30560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zebhucmybpbydpgufhegqayyfnizstmg ; /usr/bin/python3'
Nov 29 04:48:08 compute-0 sudo[30560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:08 compute-0 python3[30562]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:48:08 compute-0 sudo[30560]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:08 compute-0 sudo[30633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vikmaxbmllkbpgglvffnxpnaxigfffih ; /usr/bin/python3'
Nov 29 04:48:08 compute-0 sudo[30633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:08 compute-0 python3[30635]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:48:08 compute-0 sudo[30633]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:08 compute-0 sudo[30659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkaumubtfljfaeuzmrlpijfgnyhjkwve ; /usr/bin/python3'
Nov 29 04:48:08 compute-0 sudo[30659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:08 compute-0 python3[30661]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:48:08 compute-0 sudo[30659]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:09 compute-0 sudo[30732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ithznxnanwikpfhdlferswfdgffdqdte ; /usr/bin/python3'
Nov 29 04:48:09 compute-0 sudo[30732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:09 compute-0 python3[30734]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:48:09 compute-0 sudo[30732]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:09 compute-0 sudo[30760]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imzjoapiljzxqyiqzaassvlxbkaggvyf ; /usr/bin/python3'
Nov 29 04:48:09 compute-0 sudo[30760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:09 compute-0 python3[30762]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 04:48:09 compute-0 sudo[30760]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:09 compute-0 sudo[30833]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zypwzqzmqifiexvdssopshmcuwdvpexr ; /usr/bin/python3'
Nov 29 04:48:09 compute-0 sudo[30833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:48:09 compute-0 python3[30835]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764391685.5083816-33534-100409545444766/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 04:48:09 compute-0 sudo[30833]: pam_unix(sudo:session): session closed for user root
Nov 29 04:48:09 compute-0 sshd-session[30735]: Invalid user astra from 176.109.67.96 port 38698
Nov 29 04:48:10 compute-0 sshd-session[30735]: Received disconnect from 176.109.67.96 port 38698:11: Bye Bye [preauth]
Nov 29 04:48:10 compute-0 sshd-session[30735]: Disconnected from invalid user astra 176.109.67.96 port 38698 [preauth]
Nov 29 04:48:12 compute-0 sshd-session[30861]: Connection closed by 192.168.122.11 port 42660 [preauth]
Nov 29 04:48:12 compute-0 sshd-session[30860]: Connection closed by 192.168.122.11 port 42652 [preauth]
Nov 29 04:48:12 compute-0 sshd-session[30862]: Unable to negotiate with 192.168.122.11 port 42674: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 29 04:48:12 compute-0 sshd-session[30863]: Unable to negotiate with 192.168.122.11 port 42684: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 29 04:48:12 compute-0 sshd-session[30864]: Unable to negotiate with 192.168.122.11 port 42692: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 29 04:48:20 compute-0 python3[30893]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:48:33 compute-0 sshd-session[30895]: Invalid user oracle from 190.0.247.85 port 43918
Nov 29 04:48:33 compute-0 sshd-session[30895]: Received disconnect from 190.0.247.85 port 43918:11: Bye Bye [preauth]
Nov 29 04:48:33 compute-0 sshd-session[30895]: Disconnected from invalid user oracle 190.0.247.85 port 43918 [preauth]
Nov 29 04:49:15 compute-0 sshd-session[30897]: Invalid user admin from 176.109.67.96 port 54950
Nov 29 04:49:15 compute-0 sshd-session[30897]: Received disconnect from 176.109.67.96 port 54950:11: Bye Bye [preauth]
Nov 29 04:49:15 compute-0 sshd-session[30897]: Disconnected from invalid user admin 176.109.67.96 port 54950 [preauth]
Nov 29 04:49:45 compute-0 sshd-session[30901]: Invalid user kingbase from 190.0.247.85 port 48914
Nov 29 04:49:45 compute-0 sshd-session[30901]: Received disconnect from 190.0.247.85 port 48914:11: Bye Bye [preauth]
Nov 29 04:49:45 compute-0 sshd-session[30901]: Disconnected from invalid user kingbase 190.0.247.85 port 48914 [preauth]
Nov 29 04:49:49 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 04:49:49 compute-0 sshd-session[30899]: Invalid user kiosk from 101.47.141.125 port 33236
Nov 29 04:49:49 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 04:49:49 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 04:49:49 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 04:49:49 compute-0 sshd-session[30899]: Received disconnect from 101.47.141.125 port 33236:11: Bye Bye [preauth]
Nov 29 04:49:49 compute-0 sshd-session[30899]: Disconnected from invalid user kiosk 101.47.141.125 port 33236 [preauth]
Nov 29 04:50:18 compute-0 sshd-session[30907]: Invalid user root2 from 176.109.67.96 port 54292
Nov 29 04:50:19 compute-0 sshd-session[30907]: Received disconnect from 176.109.67.96 port 54292:11: Bye Bye [preauth]
Nov 29 04:50:19 compute-0 sshd-session[30907]: Disconnected from invalid user root2 176.109.67.96 port 54292 [preauth]
Nov 29 04:50:54 compute-0 sshd-session[30909]: Invalid user deploy from 190.0.247.85 port 35940
Nov 29 04:50:55 compute-0 sshd-session[30909]: Received disconnect from 190.0.247.85 port 35940:11: Bye Bye [preauth]
Nov 29 04:50:55 compute-0 sshd-session[30909]: Disconnected from invalid user deploy 190.0.247.85 port 35940 [preauth]
Nov 29 04:51:24 compute-0 sshd-session[30912]: Invalid user gns3 from 176.109.67.96 port 56246
Nov 29 04:51:25 compute-0 sshd-session[30912]: Received disconnect from 176.109.67.96 port 56246:11: Bye Bye [preauth]
Nov 29 04:51:25 compute-0 sshd-session[30912]: Disconnected from invalid user gns3 176.109.67.96 port 56246 [preauth]
Nov 29 04:51:38 compute-0 sshd-session[30914]: Connection closed by authenticating user root 141.94.154.244 port 48518 [preauth]
Nov 29 04:52:06 compute-0 sshd-session[30916]: Invalid user admin from 190.0.247.85 port 32910
Nov 29 04:52:06 compute-0 sshd-session[30916]: Received disconnect from 190.0.247.85 port 32910:11: Bye Bye [preauth]
Nov 29 04:52:06 compute-0 sshd-session[30916]: Disconnected from invalid user admin 190.0.247.85 port 32910 [preauth]
Nov 29 04:52:16 compute-0 sshd-session[30918]: Connection closed by 61.240.213.113 port 54466
Nov 29 04:52:31 compute-0 sshd-session[30919]: Invalid user ubuntu from 176.109.67.96 port 58740
Nov 29 04:52:31 compute-0 sshd-session[30919]: Received disconnect from 176.109.67.96 port 58740:11: Bye Bye [preauth]
Nov 29 04:52:31 compute-0 sshd-session[30919]: Disconnected from invalid user ubuntu 176.109.67.96 port 58740 [preauth]
Nov 29 04:53:20 compute-0 sshd-session[29977]: Received disconnect from 38.102.83.113 port 47734:11: disconnected by user
Nov 29 04:53:20 compute-0 sshd-session[29977]: Disconnected from user zuul 38.102.83.113 port 47734
Nov 29 04:53:20 compute-0 sshd-session[29974]: pam_unix(sshd:session): session closed for user zuul
Nov 29 04:53:20 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 04:53:20 compute-0 systemd[1]: session-6.scope: Consumed 4.870s CPU time.
Nov 29 04:53:20 compute-0 systemd-logind[793]: Session 6 logged out. Waiting for processes to exit.
Nov 29 04:53:20 compute-0 systemd-logind[793]: Removed session 6.
Nov 29 04:53:23 compute-0 sshd-session[30921]: Invalid user root2 from 190.0.247.85 port 39534
Nov 29 04:53:23 compute-0 sshd-session[30921]: Received disconnect from 190.0.247.85 port 39534:11: Bye Bye [preauth]
Nov 29 04:53:23 compute-0 sshd-session[30921]: Disconnected from invalid user root2 190.0.247.85 port 39534 [preauth]
Nov 29 04:53:40 compute-0 sshd-session[30923]: Invalid user free from 176.109.67.96 port 40046
Nov 29 04:53:41 compute-0 sshd-session[30923]: Received disconnect from 176.109.67.96 port 40046:11: Bye Bye [preauth]
Nov 29 04:53:41 compute-0 sshd-session[30923]: Disconnected from invalid user free 176.109.67.96 port 40046 [preauth]
Nov 29 04:54:26 compute-0 sshd-session[30925]: Connection closed by 101.47.141.125 port 41802 [preauth]
Nov 29 04:54:40 compute-0 sshd-session[30927]: Invalid user gns3 from 190.0.247.85 port 42788
Nov 29 04:54:40 compute-0 sshd-session[30927]: Received disconnect from 190.0.247.85 port 42788:11: Bye Bye [preauth]
Nov 29 04:54:40 compute-0 sshd-session[30927]: Disconnected from invalid user gns3 190.0.247.85 port 42788 [preauth]
Nov 29 04:54:52 compute-0 sshd-session[30929]: Invalid user int from 176.109.67.96 port 49712
Nov 29 04:54:52 compute-0 sshd-session[30929]: Received disconnect from 176.109.67.96 port 49712:11: Bye Bye [preauth]
Nov 29 04:54:52 compute-0 sshd-session[30929]: Disconnected from invalid user int 176.109.67.96 port 49712 [preauth]
Nov 29 04:55:53 compute-0 sshd-session[30933]: Invalid user kiosk from 190.0.247.85 port 36778
Nov 29 04:55:54 compute-0 sshd-session[30933]: Received disconnect from 190.0.247.85 port 36778:11: Bye Bye [preauth]
Nov 29 04:55:54 compute-0 sshd-session[30933]: Disconnected from invalid user kiosk 190.0.247.85 port 36778 [preauth]
Nov 29 04:55:59 compute-0 sshd-session[30935]: Invalid user vpnuser from 176.109.67.96 port 53900
Nov 29 04:55:59 compute-0 sshd-session[30935]: Received disconnect from 176.109.67.96 port 53900:11: Bye Bye [preauth]
Nov 29 04:55:59 compute-0 sshd-session[30935]: Disconnected from invalid user vpnuser 176.109.67.96 port 53900 [preauth]
Nov 29 04:56:02 compute-0 sshd-session[30937]: Invalid user support from 78.128.112.74 port 34432
Nov 29 04:56:02 compute-0 sshd-session[30937]: Connection closed by invalid user support 78.128.112.74 port 34432 [preauth]
Nov 29 04:56:33 compute-0 sshd-session[30939]: Received disconnect from 61.240.213.113 port 51580:11:  [preauth]
Nov 29 04:56:33 compute-0 sshd-session[30939]: Disconnected from authenticating user root 61.240.213.113 port 51580 [preauth]
Nov 29 04:57:04 compute-0 sshd-session[30944]: Invalid user postgres from 176.109.67.96 port 48980
Nov 29 04:57:04 compute-0 sshd-session[30944]: Received disconnect from 176.109.67.96 port 48980:11: Bye Bye [preauth]
Nov 29 04:57:04 compute-0 sshd-session[30944]: Disconnected from invalid user postgres 176.109.67.96 port 48980 [preauth]
Nov 29 04:57:04 compute-0 sshd-session[30946]: Invalid user pzuser from 190.0.247.85 port 44186
Nov 29 04:57:05 compute-0 sshd-session[30946]: Received disconnect from 190.0.247.85 port 44186:11: Bye Bye [preauth]
Nov 29 04:57:05 compute-0 sshd-session[30946]: Disconnected from invalid user pzuser 190.0.247.85 port 44186 [preauth]
Nov 29 04:57:48 compute-0 sshd-session[30948]: Received disconnect from 80.94.93.233 port 51098:11:  [preauth]
Nov 29 04:57:48 compute-0 sshd-session[30948]: Disconnected from authenticating user root 80.94.93.233 port 51098 [preauth]
Nov 29 04:58:09 compute-0 sshd-session[30950]: Invalid user oracle from 176.109.67.96 port 42626
Nov 29 04:58:09 compute-0 sshd-session[30950]: Received disconnect from 176.109.67.96 port 42626:11: Bye Bye [preauth]
Nov 29 04:58:09 compute-0 sshd-session[30950]: Disconnected from invalid user oracle 176.109.67.96 port 42626 [preauth]
Nov 29 04:58:16 compute-0 sshd-session[30952]: Invalid user ubuntu from 190.0.247.85 port 46974
Nov 29 04:58:16 compute-0 sshd-session[30952]: Received disconnect from 190.0.247.85 port 46974:11: Bye Bye [preauth]
Nov 29 04:58:16 compute-0 sshd-session[30952]: Disconnected from invalid user ubuntu 190.0.247.85 port 46974 [preauth]
Nov 29 04:58:39 compute-0 sshd[1004]: Timeout before authentication for connection from 101.47.141.125 to 38.102.83.17, pid = 30941
Nov 29 04:58:57 compute-0 sshd[1004]: drop connection #0 from [101.47.141.125]:40212 on [38.102.83.17]:22 penalty: exceeded LoginGraceTime
Nov 29 04:59:18 compute-0 sshd-session[30954]: Received disconnect from 176.109.67.96 port 37428:11: Bye Bye [preauth]
Nov 29 04:59:18 compute-0 sshd-session[30954]: Disconnected from authenticating user root 176.109.67.96 port 37428 [preauth]
Nov 29 04:59:29 compute-0 sshd-session[30956]: Invalid user cgpexpert from 190.0.247.85 port 39236
Nov 29 04:59:29 compute-0 sshd-session[30956]: Received disconnect from 190.0.247.85 port 39236:11: Bye Bye [preauth]
Nov 29 04:59:29 compute-0 sshd-session[30956]: Disconnected from invalid user cgpexpert 190.0.247.85 port 39236 [preauth]
Nov 29 04:59:30 compute-0 sshd-session[30958]: Accepted publickey for zuul from 192.168.122.30 port 33178 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 04:59:30 compute-0 systemd-logind[793]: New session 7 of user zuul.
Nov 29 04:59:30 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 29 04:59:30 compute-0 sshd-session[30958]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 04:59:31 compute-0 python3.9[31111]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 04:59:33 compute-0 sudo[31293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orejvgchnzzywasifipfjfhplzfzggnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392372.937549-32-193785382654102/AnsiballZ_command.py'
Nov 29 04:59:33 compute-0 sudo[31293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 04:59:34 compute-0 python3.9[31295]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 04:59:41 compute-0 sudo[31293]: pam_unix(sudo:session): session closed for user root
Nov 29 04:59:42 compute-0 sshd-session[30961]: Connection closed by 192.168.122.30 port 33178
Nov 29 04:59:42 compute-0 sshd-session[30958]: pam_unix(sshd:session): session closed for user zuul
Nov 29 04:59:42 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 04:59:42 compute-0 systemd[1]: session-7.scope: Consumed 8.106s CPU time.
Nov 29 04:59:42 compute-0 systemd-logind[793]: Session 7 logged out. Waiting for processes to exit.
Nov 29 04:59:42 compute-0 systemd-logind[793]: Removed session 7.
Nov 29 04:59:57 compute-0 sshd-session[31352]: Accepted publickey for zuul from 192.168.122.30 port 42532 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 04:59:57 compute-0 systemd-logind[793]: New session 8 of user zuul.
Nov 29 04:59:57 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 29 04:59:57 compute-0 sshd-session[31352]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 04:59:58 compute-0 python3.9[31505]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 04:59:59 compute-0 python3.9[31679]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:00:00 compute-0 sudo[31830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asdmrwkukgewdfynkownarewsqkljvph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392400.2494922-45-88550921920439/AnsiballZ_command.py'
Nov 29 05:00:00 compute-0 sudo[31830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:00:00 compute-0 python3.9[31832]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:00:00 compute-0 sudo[31830]: pam_unix(sudo:session): session closed for user root
Nov 29 05:00:01 compute-0 sudo[31983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzctgmsrzhvmfxwyrnqvyhawslwercrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392401.223497-57-110764559169625/AnsiballZ_stat.py'
Nov 29 05:00:01 compute-0 sudo[31983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:00:01 compute-0 python3.9[31985]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:00:01 compute-0 sudo[31983]: pam_unix(sudo:session): session closed for user root
Nov 29 05:00:02 compute-0 sudo[32135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtchhiszunklvqfyrvskjotycdyrqvux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392402.0497248-65-171532202048303/AnsiballZ_file.py'
Nov 29 05:00:02 compute-0 sudo[32135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:00:02 compute-0 python3.9[32137]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:00:02 compute-0 sudo[32135]: pam_unix(sudo:session): session closed for user root
Nov 29 05:00:03 compute-0 sudo[32287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isnhguonppdbzvyycikibitirpcobsup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392402.866403-73-66379025660430/AnsiballZ_stat.py'
Nov 29 05:00:03 compute-0 sudo[32287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:00:03 compute-0 python3.9[32289]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:00:03 compute-0 sudo[32287]: pam_unix(sudo:session): session closed for user root
Nov 29 05:00:03 compute-0 sudo[32410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhjzfhinvnqpawdttjvwgaxdvtnyrvcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392402.866403-73-66379025660430/AnsiballZ_copy.py'
Nov 29 05:00:03 compute-0 sudo[32410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:00:04 compute-0 python3.9[32412]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392402.866403-73-66379025660430/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:00:04 compute-0 sudo[32410]: pam_unix(sudo:session): session closed for user root
Nov 29 05:00:04 compute-0 sudo[32562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdjmyctulgbqtaauidixlamburhqsjlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392404.3229399-88-104367671869020/AnsiballZ_setup.py'
Nov 29 05:00:04 compute-0 sudo[32562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:00:04 compute-0 python3.9[32564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:00:05 compute-0 sudo[32562]: pam_unix(sudo:session): session closed for user root
Nov 29 05:00:05 compute-0 sudo[32718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jliwkggnfspsomtcmmjffkdyfbprdoqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392405.304361-96-18234271512597/AnsiballZ_file.py'
Nov 29 05:00:05 compute-0 sudo[32718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:00:05 compute-0 python3.9[32720]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:00:05 compute-0 sudo[32718]: pam_unix(sudo:session): session closed for user root
Nov 29 05:00:06 compute-0 sudo[32870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvnedpkywjeevvbhxgfhzjxrirrkptfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392405.9854794-105-80729506808710/AnsiballZ_file.py'
Nov 29 05:00:06 compute-0 sudo[32870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:00:06 compute-0 python3.9[32872]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:00:06 compute-0 sudo[32870]: pam_unix(sudo:session): session closed for user root
Nov 29 05:00:07 compute-0 python3.9[33022]: ansible-ansible.builtin.service_facts Invoked
Nov 29 05:00:12 compute-0 python3.9[33275]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:00:12 compute-0 python3.9[33425]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:00:14 compute-0 python3.9[33579]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:00:15 compute-0 sudo[33735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izdzxlmuhkaaikfrcsfkmcjrjmyekcta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392414.7389078-153-130236077973878/AnsiballZ_setup.py'
Nov 29 05:00:15 compute-0 sudo[33735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:00:15 compute-0 python3.9[33737]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:00:15 compute-0 sudo[33735]: pam_unix(sudo:session): session closed for user root
Nov 29 05:00:16 compute-0 sudo[33819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vokunlkymabxrchmgebrxnckfskqlwef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392414.7389078-153-130236077973878/AnsiballZ_dnf.py'
Nov 29 05:00:16 compute-0 sudo[33819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:00:16 compute-0 python3.9[33821]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:00:42 compute-0 sshd-session[33959]: Invalid user opc from 190.0.247.85 port 55596
Nov 29 05:00:42 compute-0 sshd-session[33959]: Received disconnect from 190.0.247.85 port 55596:11: Bye Bye [preauth]
Nov 29 05:00:42 compute-0 sshd-session[33959]: Disconnected from invalid user opc 190.0.247.85 port 55596 [preauth]
Nov 29 05:00:58 compute-0 systemd[1]: Reloading.
Nov 29 05:00:58 compute-0 systemd-rc-local-generator[34020]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:00:59 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 05:00:59 compute-0 systemd[1]: Reloading.
Nov 29 05:00:59 compute-0 systemd-rc-local-generator[34054]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:00:59 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 05:00:59 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 05:00:59 compute-0 systemd[1]: Reloading.
Nov 29 05:00:59 compute-0 systemd-rc-local-generator[34098]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:00:59 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 05:01:00 compute-0 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 05:01:00 compute-0 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 05:01:00 compute-0 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 05:01:01 compute-0 CROND[34122]: (root) CMD (run-parts /etc/cron.hourly)
Nov 29 05:01:01 compute-0 run-parts[34125]: (/etc/cron.hourly) starting 0anacron
Nov 29 05:01:01 compute-0 anacron[34133]: Anacron started on 2025-11-29
Nov 29 05:01:01 compute-0 anacron[34133]: Will run job `cron.daily' in 26 min.
Nov 29 05:01:01 compute-0 anacron[34133]: Will run job `cron.weekly' in 46 min.
Nov 29 05:01:01 compute-0 anacron[34133]: Will run job `cron.monthly' in 66 min.
Nov 29 05:01:01 compute-0 anacron[34133]: Jobs will be executed sequentially
Nov 29 05:01:01 compute-0 run-parts[34135]: (/etc/cron.hourly) finished 0anacron
Nov 29 05:01:01 compute-0 CROND[34121]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 29 05:01:22 compute-0 sshd-session[34200]: Connection closed by 101.47.141.125 port 42450 [preauth]
Nov 29 05:01:56 compute-0 sshd-session[34313]: Invalid user dmdba from 190.0.247.85 port 53650
Nov 29 05:01:56 compute-0 sshd-session[34313]: Received disconnect from 190.0.247.85 port 53650:11: Bye Bye [preauth]
Nov 29 05:01:56 compute-0 sshd-session[34313]: Disconnected from invalid user dmdba 190.0.247.85 port 53650 [preauth]
Nov 29 05:02:01 compute-0 kernel: SELinux:  Converting 2717 SID table entries...
Nov 29 05:02:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:02:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 05:02:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:02:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:02:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:02:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:02:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:02:02 compute-0 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 05:02:02 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 05:02:02 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 05:02:02 compute-0 systemd[1]: Reloading.
Nov 29 05:02:02 compute-0 systemd-rc-local-generator[34430]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:02:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 05:02:02 compute-0 sudo[33819]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 05:02:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 05:02:03 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.032s CPU time.
Nov 29 05:02:03 compute-0 systemd[1]: run-r9dd143f0c8464a9e84cfa4542bc1d09a.service: Deactivated successfully.
Nov 29 05:02:03 compute-0 sudo[35337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abifupainxeszqukzmhhzlxybtxmutcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392523.0896888-165-94733581355987/AnsiballZ_command.py'
Nov 29 05:02:03 compute-0 sudo[35337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:03 compute-0 python3.9[35339]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:02:04 compute-0 sudo[35337]: pam_unix(sudo:session): session closed for user root
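
The two sudo sessions above install the base package set with dnf and then audit it with rpm -V, which exits non-zero if any installed file diverges from the package manifest. A rough manual equivalent, assuming the same package list as the logged tasks:

    pkgs="driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux
          python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc
          ksmtuned systemd-container crypto-policies-scripts grubby sos"
    dnf -y install $pkgs        # same list as the ansible.legacy.dnf task
    rpm -V $pkgs                # non-zero exit if any file fails verification
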
Nov 29 05:02:05 compute-0 sudo[35618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrzbpoebdlilipgeqszwsncqfmqrnpoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392524.4857357-173-160084170745681/AnsiballZ_selinux.py'
Nov 29 05:02:05 compute-0 sudo[35618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:05 compute-0 python3.9[35620]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 05:02:05 compute-0 sudo[35618]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:06 compute-0 sudo[35770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzeerwqydwetllpfvosnhduastkmcrux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392525.7894847-184-125207843007788/AnsiballZ_command.py'
Nov 29 05:02:06 compute-0 sudo[35770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:06 compute-0 python3.9[35772]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 05:02:07 compute-0 sudo[35770]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:07 compute-0 sudo[35923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoesbdpklkhvhnljorxgicwuubtzpngb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392527.3441842-192-7479554805562/AnsiballZ_file.py'
Nov 29 05:02:07 compute-0 sudo[35923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:09 compute-0 python3.9[35925]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:02:09 compute-0 sudo[35923]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:09 compute-0 sudo[36076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znwuxwqoloefchfqdmcpihlrbrmwzops ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392529.354516-200-74066142578617/AnsiballZ_mount.py'
Nov 29 05:02:09 compute-0 sudo[36076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:10 compute-0 python3.9[36078]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 05:02:10 compute-0 sudo[36076]: pam_unix(sudo:session): session closed for user root
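
The three tasks ending here create a 1 GiB swap file, lock down its permissions, and persist it in /etc/fstab; with state=present the ansible.posix.mount module only edits fstab, it does not activate the swap. A minimal shell sketch of the same sequence (mkswap/swapon do not appear in this log and are assumed to run elsewhere):

    dd if=/dev/zero of=/swap bs=1M count=1024        # 1 GiB, as in the logged command
    chmod 600 /swap                                  # matches the file task (root:root, 0600)
    grep -q '^/swap ' /etc/fstab || \
        echo '/swap none swap sw 0 0' >> /etc/fstab  # opts=sw, dump=0, passno=0
    # activating it would further require: mkswap /swap && swapon /swap
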
Nov 29 05:02:11 compute-0 sudo[36228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvxxgunymidznrcvitqwdsygrcyewuyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392530.8269866-228-69751336047737/AnsiballZ_file.py'
Nov 29 05:02:11 compute-0 sudo[36228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:11 compute-0 python3.9[36230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:02:11 compute-0 sudo[36228]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:11 compute-0 sudo[36380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcorxkdurkhuwpcivfocmqvgnxygjafn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392531.649347-236-86836183709546/AnsiballZ_stat.py'
Nov 29 05:02:11 compute-0 sudo[36380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:12 compute-0 python3.9[36382]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:02:12 compute-0 sudo[36380]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:12 compute-0 sudo[36504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ansouaxtbeycwmpqriwkmfrorzluxxwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392531.649347-236-86836183709546/AnsiballZ_copy.py'
Nov 29 05:02:12 compute-0 sudo[36504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:15 compute-0 python3.9[36506]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392531.649347-236-86836183709546/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:02:15 compute-0 sudo[36504]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:16 compute-0 sudo[36656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwnwauspluacjwxokpcmljcnxochtmed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392535.7449448-260-167608944266337/AnsiballZ_stat.py'
Nov 29 05:02:16 compute-0 sudo[36656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:16 compute-0 python3.9[36658]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:02:16 compute-0 sudo[36656]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:17 compute-0 sudo[36808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edakfpzpjpbbmrqesrgzldaocvikpmim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392537.164193-268-222641368680691/AnsiballZ_command.py'
Nov 29 05:02:17 compute-0 sudo[36808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:17 compute-0 python3.9[36810]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:02:17 compute-0 sudo[36808]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:18 compute-0 sudo[36961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcimduvnxqyylbuqnqvrzbkavznghgvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392537.99664-276-87020779443122/AnsiballZ_file.py'
Nov 29 05:02:18 compute-0 sudo[36961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:18 compute-0 python3.9[36963]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:02:18 compute-0 sudo[36961]: pam_unix(sudo:session): session closed for user root
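
vgimportdevices --all writes /etc/lvm/devices/system.devices for every volume group currently visible, and the follow-up file task guarantees the file exists (root:root, 0600) even on a host with no volume groups; on EL9, where the LVM devices file is enabled by default, an existing system.devices restricts LVM scanning to the devices it lists. By hand this would look roughly like:

    vgimportdevices --all                      # may create the devices file itself
    touch /etc/lvm/devices/system.devices      # ensure it exists even with no VGs
    chmod 600 /etc/lvm/devices/system.devices
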
Nov 29 05:02:19 compute-0 sudo[37113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjcvxjeevluntcfnbmpdqclznjxnarve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392538.9587579-287-146667375129810/AnsiballZ_getent.py'
Nov 29 05:02:19 compute-0 sudo[37113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:19 compute-0 python3.9[37115]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 05:02:19 compute-0 sudo[37113]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:19 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:02:19 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:02:20 compute-0 sudo[37267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmufwnjdofpbyqgqrppzdglhmtsrlpxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392539.7504098-295-16258894106404/AnsiballZ_group.py'
Nov 29 05:02:20 compute-0 sudo[37267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:20 compute-0 python3.9[37269]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 05:02:20 compute-0 groupadd[37270]: group added to /etc/group: name=qemu, GID=107
Nov 29 05:02:20 compute-0 groupadd[37270]: group added to /etc/gshadow: name=qemu
Nov 29 05:02:20 compute-0 groupadd[37270]: new group: name=qemu, GID=107
Nov 29 05:02:20 compute-0 sudo[37267]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:21 compute-0 sudo[37425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcrkyztlitlhlylwccnhalfhofsndhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392540.7053025-303-199351704684733/AnsiballZ_user.py'
Nov 29 05:02:21 compute-0 sudo[37425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:21 compute-0 python3.9[37427]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 05:02:21 compute-0 useradd[37429]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 05:02:21 compute-0 sudo[37425]: pam_unix(sudo:session): session closed for user root
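
The getent/group/user trio above reserves the qemu account with fixed IDs (GID/UID 107) before any later package install can auto-allocate different ones. Roughly equivalent to:

    getent group qemu  || groupadd -g 107 qemu
    getent passwd qemu || useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin qemu
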
Nov 29 05:02:22 compute-0 sudo[37585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqwpdmesjgkmceluganvkswhgsognfoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392541.749843-311-260239180488072/AnsiballZ_getent.py'
Nov 29 05:02:22 compute-0 sudo[37585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:22 compute-0 python3.9[37587]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 05:02:22 compute-0 sudo[37585]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:22 compute-0 sudo[37738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkopwtgaiubvwechlzwwyuofukkmapbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392542.5202918-319-268349512909133/AnsiballZ_group.py'
Nov 29 05:02:22 compute-0 sudo[37738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:23 compute-0 python3.9[37740]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 05:02:23 compute-0 groupadd[37741]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 29 05:02:23 compute-0 groupadd[37741]: group added to /etc/gshadow: name=hugetlbfs
Nov 29 05:02:23 compute-0 groupadd[37741]: new group: name=hugetlbfs, GID=42477
Nov 29 05:02:23 compute-0 sudo[37738]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:23 compute-0 sudo[37896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awaufkyjjdgwbyftghictzamqdphynjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392543.3186474-328-217654119358764/AnsiballZ_file.py'
Nov 29 05:02:23 compute-0 sudo[37896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:23 compute-0 python3.9[37898]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 05:02:23 compute-0 sudo[37896]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:24 compute-0 sudo[38048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbrxarjasdaegiiguvgbrpcxzsxeskby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392544.1069307-339-122878778396614/AnsiballZ_dnf.py'
Nov 29 05:02:24 compute-0 sudo[38048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:24 compute-0 python3.9[38050]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:02:26 compute-0 sudo[38048]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:26 compute-0 sudo[38201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdiattadvasvxfnmoulkeldhfvdilvbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392546.253574-347-173732478271946/AnsiballZ_file.py'
Nov 29 05:02:26 compute-0 sudo[38201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:26 compute-0 python3.9[38203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:02:26 compute-0 sudo[38201]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:27 compute-0 sudo[38353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrldjjammwptpwwamzaighrglpufzlmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392546.9201367-355-57568386300701/AnsiballZ_stat.py'
Nov 29 05:02:27 compute-0 sudo[38353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:27 compute-0 python3.9[38355]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:02:27 compute-0 sudo[38353]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:27 compute-0 sudo[38476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuxwrniicpdffwwolktgbedmbutiqllg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392546.9201367-355-57568386300701/AnsiballZ_copy.py'
Nov 29 05:02:27 compute-0 sudo[38476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:27 compute-0 python3.9[38478]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764392546.9201367-355-57568386300701/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:02:28 compute-0 sudo[38476]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:28 compute-0 sudo[38628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwwdqrnysbibatztukogafxurkrqcsis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392548.222466-370-199014184887094/AnsiballZ_systemd.py'
Nov 29 05:02:28 compute-0 sudo[38628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:29 compute-0 python3.9[38630]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:02:29 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 05:02:29 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 05:02:29 compute-0 kernel: Bridge firewalling registered
Nov 29 05:02:29 compute-0 systemd-modules-load[38634]: Inserted module 'br_netfilter'
Nov 29 05:02:29 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 05:02:29 compute-0 sudo[38628]: pam_unix(sudo:session): session closed for user root
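
Writing 99-edpm.conf into /etc/modules-load.d and restarting systemd-modules-load is what pulls br_netfilter in; the kernel message above is the reminder that bridged traffic no longer passes through ip/ip6/arptables unless this module is loaded. The same effect by hand, assuming br_netfilter is the only module the rendered template lists:

    printf 'br_netfilter\n' > /etc/modules-load.d/99-edpm.conf
    systemctl restart systemd-modules-load.service
    lsmod | grep -w br_netfilter     # confirm the module was inserted
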
Nov 29 05:02:29 compute-0 sudo[38787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viwpyjbfouskmlxyfzheobgxyrcrsgre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392549.5754542-378-229485903586267/AnsiballZ_stat.py'
Nov 29 05:02:29 compute-0 sudo[38787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:30 compute-0 python3.9[38789]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:02:30 compute-0 sudo[38787]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:30 compute-0 sudo[38910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqwsfupjaynfhflihitsifrgqryeiwhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392549.5754542-378-229485903586267/AnsiballZ_copy.py'
Nov 29 05:02:30 compute-0 sudo[38910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:30 compute-0 python3.9[38912]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764392549.5754542-378-229485903586267/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:02:30 compute-0 sudo[38910]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:31 compute-0 sudo[39062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bspqvjhqonqwqxuuenqvojilkzmgjiem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392551.0046082-396-203611985870835/AnsiballZ_dnf.py'
Nov 29 05:02:31 compute-0 sudo[39062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:31 compute-0 python3.9[39064]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:02:34 compute-0 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 05:02:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 05:02:35 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 05:02:35 compute-0 systemd[1]: Reloading.
Nov 29 05:02:35 compute-0 systemd-rc-local-generator[39129]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:02:35 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 05:02:36 compute-0 sudo[39062]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:36 compute-0 python3.9[40423]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:02:37 compute-0 python3.9[41419]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 05:02:38 compute-0 python3.9[42265]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:02:39 compute-0 sudo[43224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cidwenfsiortmtxdqbugrdohbgnwifoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392558.7335925-435-270589090900743/AnsiballZ_command.py'
Nov 29 05:02:39 compute-0 sudo[43224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:39 compute-0 python3.9[43240]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:02:39 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 05:02:39 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 05:02:39 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.984s CPU time.
Nov 29 05:02:39 compute-0 systemd[1]: run-rbcb73c96f7fb44f195e91c23fd0ba4ed.service: Deactivated successfully.
Nov 29 05:02:39 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 05:02:39 compute-0 systemd[1]: Starting Authorization Manager...
Nov 29 05:02:39 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 05:02:39 compute-0 polkitd[43510]: Started polkitd version 0.117
Nov 29 05:02:39 compute-0 polkitd[43510]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 05:02:39 compute-0 polkitd[43510]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 05:02:39 compute-0 polkitd[43510]: Finished loading, compiling and executing 2 rules
Nov 29 05:02:39 compute-0 systemd[1]: Started Authorization Manager.
Nov 29 05:02:39 compute-0 polkitd[43510]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 29 05:02:39 compute-0 sudo[43224]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:40 compute-0 sudo[43678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yahuloxnquyicmcsezozmjtydpfcwoib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392560.1394382-444-223888620313973/AnsiballZ_systemd.py'
Nov 29 05:02:40 compute-0 sudo[43678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:40 compute-0 python3.9[43680]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:02:40 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 05:02:40 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 05:02:40 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 05:02:40 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 05:02:41 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 05:02:41 compute-0 sudo[43678]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:41 compute-0 python3.9[43842]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 05:02:44 compute-0 sudo[43992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjeoigbkubttmexmqzmwcvkomlurfxhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392563.7634768-501-18209225784283/AnsiballZ_systemd.py'
Nov 29 05:02:44 compute-0 sudo[43992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:44 compute-0 python3.9[43994]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:02:44 compute-0 systemd[1]: Reloading.
Nov 29 05:02:44 compute-0 systemd-rc-local-generator[44022]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:02:44 compute-0 sudo[43992]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:44 compute-0 systemd[1]: Starting dnf makecache...
Nov 29 05:02:44 compute-0 dnf[44032]: Failed determining last makecache time.
Nov 29 05:02:44 compute-0 dnf[44032]: delorean-openstack-barbican-42b4c41831408a8e323 114 kB/s | 3.0 kB     00:00
Nov 29 05:02:44 compute-0 dnf[44032]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 152 kB/s | 3.0 kB     00:00
Nov 29 05:02:44 compute-0 dnf[44032]: delorean-openstack-cinder-1c00d6490d88e436f26ef 157 kB/s | 3.0 kB     00:00
Nov 29 05:02:44 compute-0 dnf[44032]: delorean-python-stevedore-c4acc5639fd2329372142 157 kB/s | 3.0 kB     00:00
Nov 29 05:02:44 compute-0 dnf[44032]: delorean-python-cloudkitty-tests-tempest-2c80f8 160 kB/s | 3.0 kB     00:00
Nov 29 05:02:44 compute-0 dnf[44032]: delorean-os-net-config-9758ab42364673d01bc5014e 153 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 134 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-python-designate-tests-tempest-347fdbc 146 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-openstack-glance-1fd12c29b339f30fe823e 148 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 sudo[44192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbitaerdolkqofaoyhvlirtqwbizpsyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392564.803927-501-140328415822228/AnsiballZ_systemd.py'
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 154 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 sudo[44192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-openstack-manila-3c01b7181572c95dac462 169 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-python-whitebox-neutron-tests-tempest- 170 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-openstack-octavia-ba397f07a7331190208c 167 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-openstack-watcher-c014f81a8647287f6dcc 167 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-python-tcib-1124124ec06aadbac34f0d340b 157 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 161 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-openstack-swift-dc98a8463506ac520c469a 159 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-python-tempestconf-8515371b7cceebd4282 133 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: delorean-openstack-heat-ui-013accbfd179753bc3f0 158 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: CentOS Stream 9 - BaseOS                         78 kB/s | 7.3 kB     00:00
Nov 29 05:02:45 compute-0 python3.9[44194]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:02:45 compute-0 systemd[1]: Reloading.
Nov 29 05:02:45 compute-0 dnf[44032]: CentOS Stream 9 - AppStream                      77 kB/s | 7.4 kB     00:00
Nov 29 05:02:45 compute-0 systemd-rc-local-generator[44235]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:02:45 compute-0 sudo[44192]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:45 compute-0 dnf[44032]: CentOS Stream 9 - CRB                            83 kB/s | 7.2 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: CentOS Stream 9 - Extras packages                78 kB/s | 8.3 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: dlrn-antelope-testing                           181 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: dlrn-antelope-build-deps                        161 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: centos9-rabbitmq                                 95 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: centos9-storage                                 128 kB/s | 3.0 kB     00:00
Nov 29 05:02:45 compute-0 dnf[44032]: centos9-opstools                                132 kB/s | 3.0 kB     00:00
Nov 29 05:02:46 compute-0 dnf[44032]: NFV SIG OpenvSwitch                             137 kB/s | 3.0 kB     00:00
Nov 29 05:02:46 compute-0 dnf[44032]: repo-setup-centos-appstream                     109 kB/s | 4.4 kB     00:00
Nov 29 05:02:46 compute-0 sudo[44407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hblyvjemvwxtekqitbhlpqmknorgnfsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392565.926339-517-215345623560308/AnsiballZ_command.py'
Nov 29 05:02:46 compute-0 sudo[44407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:46 compute-0 dnf[44032]: repo-setup-centos-baseos                        162 kB/s | 3.9 kB     00:00
Nov 29 05:02:46 compute-0 dnf[44032]: repo-setup-centos-highavailability               98 kB/s | 3.9 kB     00:00
Nov 29 05:02:46 compute-0 python3.9[44409]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:02:46 compute-0 dnf[44032]: repo-setup-centos-powertools                    177 kB/s | 4.3 kB     00:00
Nov 29 05:02:46 compute-0 sudo[44407]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:46 compute-0 dnf[44032]: Extra Packages for Enterprise Linux 9 - x86_64  212 kB/s |  33 kB     00:00
Nov 29 05:02:46 compute-0 sudo[44567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdgvjlfaenqisnqdubrtzpxjioyqvwyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392566.5997264-525-17427956664522/AnsiballZ_command.py'
Nov 29 05:02:46 compute-0 sudo[44567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:47 compute-0 python3.9[44569]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:02:47 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 29 05:02:47 compute-0 sudo[44567]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:47 compute-0 dnf[44032]: Metadata cache created.
Nov 29 05:02:47 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 05:02:47 compute-0 systemd[1]: Finished dnf makecache.
Nov 29 05:02:47 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.717s CPU time.
Nov 29 05:02:47 compute-0 sudo[44721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbroxboryjokgfnmehdprxpogzuqwxgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392567.2607086-533-190243544670270/AnsiballZ_command.py'
Nov 29 05:02:47 compute-0 sudo[44721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:47 compute-0 python3.9[44723]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:02:49 compute-0 sudo[44721]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:49 compute-0 sudo[44883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuhnytgqwpednivfhjnbnqoojedbbkum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392569.2672036-541-210957456827748/AnsiballZ_command.py'
Nov 29 05:02:49 compute-0 sudo[44883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:49 compute-0 python3.9[44885]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:02:49 compute-0 sudo[44883]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:50 compute-0 sudo[45036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfyhmjjzhlyrkszvsbbkxowtgogfekkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392569.9430456-549-184527850670362/AnsiballZ_systemd.py'
Nov 29 05:02:50 compute-0 sudo[45036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:50 compute-0 python3.9[45038]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:02:50 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 05:02:50 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 05:02:50 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 05:02:50 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 29 05:02:50 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 05:02:50 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 29 05:02:50 compute-0 sudo[45036]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:51 compute-0 sshd-session[31355]: Connection closed by 192.168.122.30 port 42532
Nov 29 05:02:51 compute-0 sshd-session[31352]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:02:51 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 05:02:51 compute-0 systemd[1]: session-8.scope: Consumed 2min 9.959s CPU time.
Nov 29 05:02:51 compute-0 systemd-logind[793]: Session 8 logged out. Waiting for processes to exit.
Nov 29 05:02:51 compute-0 systemd-logind[793]: Removed session 8.
Nov 29 05:02:56 compute-0 sshd-session[45068]: Accepted publickey for zuul from 192.168.122.30 port 42046 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:02:56 compute-0 systemd-logind[793]: New session 9 of user zuul.
Nov 29 05:02:56 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 29 05:02:56 compute-0 sshd-session[45068]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:02:57 compute-0 python3.9[45221]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:02:58 compute-0 sudo[45375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwhseawoagpledloxiptqmdmwzkjfupe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392578.13852-36-99848160484254/AnsiballZ_getent.py'
Nov 29 05:02:58 compute-0 sudo[45375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:58 compute-0 python3.9[45377]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 05:02:58 compute-0 sudo[45375]: pam_unix(sudo:session): session closed for user root
Nov 29 05:02:59 compute-0 sudo[45528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfrrkwtkrouesmlcjbzwyqbycxfzueyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392578.951232-44-119988777582716/AnsiballZ_group.py'
Nov 29 05:02:59 compute-0 sudo[45528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:02:59 compute-0 python3.9[45530]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 05:02:59 compute-0 groupadd[45531]: group added to /etc/group: name=openvswitch, GID=42476
Nov 29 05:02:59 compute-0 groupadd[45531]: group added to /etc/gshadow: name=openvswitch
Nov 29 05:02:59 compute-0 groupadd[45531]: new group: name=openvswitch, GID=42476
Nov 29 05:02:59 compute-0 sudo[45528]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:00 compute-0 sudo[45686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yatwjkmsyigeiuhqlmhmqxnyktgipjwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392579.8622057-52-203744792323638/AnsiballZ_user.py'
Nov 29 05:03:00 compute-0 sudo[45686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:00 compute-0 python3.9[45688]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 05:03:00 compute-0 useradd[45690]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 05:03:00 compute-0 useradd[45690]: add 'openvswitch' to group 'hugetlbfs'
Nov 29 05:03:00 compute-0 useradd[45690]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 29 05:03:00 compute-0 sudo[45686]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:01 compute-0 sudo[45846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysjlvmfsnthioqobjpwecfnyxziswioy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392581.0238628-62-193455187129656/AnsiballZ_setup.py'
Nov 29 05:03:01 compute-0 sudo[45846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:01 compute-0 python3.9[45848]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:03:01 compute-0 sudo[45846]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:02 compute-0 sudo[45930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drrbouteijbwttnncnzdtdqvelzdoavp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392581.0238628-62-193455187129656/AnsiballZ_dnf.py'
Nov 29 05:03:02 compute-0 sudo[45930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:02 compute-0 python3.9[45932]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 05:03:04 compute-0 sudo[45930]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:05 compute-0 sudo[46095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcztxaacaadlniqdkvrjrfeiodrtxzzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392584.8006504-76-33499013623739/AnsiballZ_dnf.py'
Nov 29 05:03:05 compute-0 sudo[46095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:05 compute-0 python3.9[46097]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:03:16 compute-0 kernel: SELinux:  Converting 2730 SID table entries...
Nov 29 05:03:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:03:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 05:03:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:03:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:03:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:03:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:03:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:03:16 compute-0 groupadd[46120]: group added to /etc/group: name=unbound, GID=993
Nov 29 05:03:16 compute-0 groupadd[46120]: group added to /etc/gshadow: name=unbound
Nov 29 05:03:16 compute-0 groupadd[46120]: new group: name=unbound, GID=993
Nov 29 05:03:16 compute-0 useradd[46127]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 29 05:03:16 compute-0 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 05:03:16 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 05:03:17 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 05:03:17 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 05:03:18 compute-0 systemd[1]: Reloading.
Nov 29 05:03:18 compute-0 systemd-rc-local-generator[46623]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:03:18 compute-0 systemd-sysv-generator[46628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:03:18 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 05:03:18 compute-0 sudo[46095]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 05:03:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 05:03:18 compute-0 systemd[1]: run-r9d5a5e8d40854739bbe0c317674b6ed0.service: Deactivated successfully.
Nov 29 05:03:19 compute-0 sudo[47193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrzwcascqabpvwghlvmadvfzwylihndj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392598.9393227-84-100962897857211/AnsiballZ_systemd.py'
Nov 29 05:03:19 compute-0 sudo[47193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:19 compute-0 python3.9[47195]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 05:03:20 compute-0 systemd[1]: Reloading.
Nov 29 05:03:21 compute-0 systemd-sysv-generator[47227]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:03:21 compute-0 systemd-rc-local-generator[47220]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:03:21 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 05:03:21 compute-0 chown[47237]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 05:03:21 compute-0 ovs-ctl[47242]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 05:03:21 compute-0 ovs-ctl[47242]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 05:03:21 compute-0 ovs-ctl[47242]: Starting ovsdb-server [  OK  ]
Nov 29 05:03:21 compute-0 ovs-vsctl[47291]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 05:03:21 compute-0 ovs-vsctl[47307]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"63cfe9d2-e938-418d-9401-5d1a600b4ede\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 05:03:21 compute-0 ovs-ctl[47242]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 05:03:21 compute-0 ovs-vsctl[47313]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 05:03:21 compute-0 ovs-ctl[47242]: Enabling remote OVSDB managers [  OK  ]
Nov 29 05:03:21 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 05:03:21 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 05:03:21 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 05:03:21 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 05:03:21 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 05:03:21 compute-0 ovs-ctl[47362]: Inserting openvswitch module [  OK  ]
Nov 29 05:03:21 compute-0 ovs-ctl[47331]: Starting ovs-vswitchd [  OK  ]
Nov 29 05:03:21 compute-0 ovs-vsctl[47380]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 05:03:21 compute-0 ovs-ctl[47331]: Enabling remote OVSDB managers [  OK  ]
Nov 29 05:03:21 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 05:03:21 compute-0 systemd[1]: Starting Open vSwitch...
Nov 29 05:03:21 compute-0 systemd[1]: Finished Open vSwitch.
Nov 29 05:03:21 compute-0 sudo[47193]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:23 compute-0 python3.9[47532]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:03:23 compute-0 sudo[47682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rggufppyvpwaykohpcymtysoydwsdtkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392603.2207296-102-85485653039162/AnsiballZ_sefcontext.py'
Nov 29 05:03:23 compute-0 sudo[47682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:23 compute-0 python3.9[47684]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 05:03:25 compute-0 kernel: SELinux:  Converting 2744 SID table entries...
Nov 29 05:03:25 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:03:25 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 05:03:25 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:03:25 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:03:25 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:03:25 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:03:25 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:03:25 compute-0 sudo[47682]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:26 compute-0 python3.9[47839]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:03:26 compute-0 sudo[47995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqhmqxgxtswthjkcbdznvilluhjeymqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392606.5038333-120-264848084613631/AnsiballZ_dnf.py'
Nov 29 05:03:26 compute-0 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 05:03:26 compute-0 sudo[47995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:27 compute-0 python3.9[47997]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:03:28 compute-0 sudo[47995]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:28 compute-0 sudo[48148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgplelmsjzwpaermyiklcbbjikxzwgum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392608.5265715-128-172501579069160/AnsiballZ_command.py'
Nov 29 05:03:28 compute-0 sudo[48148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:29 compute-0 python3.9[48150]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:03:29 compute-0 sudo[48148]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:30 compute-0 sudo[48435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqlvnswpmojypdhbopsslrdwwtsuoprw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392609.9177191-136-155645326774118/AnsiballZ_file.py'
Nov 29 05:03:30 compute-0 sudo[48435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:30 compute-0 python3.9[48437]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 05:03:30 compute-0 sudo[48435]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:31 compute-0 python3.9[48587]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:03:31 compute-0 sudo[48739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcnrpvwefsawjzspeuzujlksleghynci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392611.59825-152-224251510867627/AnsiballZ_dnf.py'
Nov 29 05:03:31 compute-0 sudo[48739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:32 compute-0 python3.9[48741]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:03:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 05:03:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 05:03:33 compute-0 systemd[1]: Reloading.
Nov 29 05:03:33 compute-0 systemd-sysv-generator[48783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:03:33 compute-0 systemd-rc-local-generator[48780]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:03:34 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 05:03:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 05:03:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 05:03:34 compute-0 systemd[1]: run-r5f0dbb3c43ee4272912c43634e6528ec.service: Deactivated successfully.
Nov 29 05:03:34 compute-0 sudo[48739]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:35 compute-0 sudo[49057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tascyetseciiknbtgbfvdylbvoismoun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392614.9566512-160-159069302026798/AnsiballZ_systemd.py'
Nov 29 05:03:35 compute-0 sudo[49057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:35 compute-0 python3.9[49059]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:03:35 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 05:03:35 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 05:03:35 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 05:03:35 compute-0 NetworkManager[7200]: <info>  [1764392615.5059] caught SIGTERM, shutting down normally.
Nov 29 05:03:35 compute-0 systemd[1]: Stopping Network Manager...
Nov 29 05:03:35 compute-0 NetworkManager[7200]: <info>  [1764392615.5070] dhcp4 (eth0): canceled DHCP transaction
Nov 29 05:03:35 compute-0 NetworkManager[7200]: <info>  [1764392615.5070] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 05:03:35 compute-0 NetworkManager[7200]: <info>  [1764392615.5070] dhcp4 (eth0): state changed no lease
Nov 29 05:03:35 compute-0 NetworkManager[7200]: <info>  [1764392615.5073] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 05:03:35 compute-0 NetworkManager[7200]: <info>  [1764392615.5132] exiting (success)
Nov 29 05:03:35 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 05:03:35 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 05:03:35 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 05:03:35 compute-0 systemd[1]: Stopped Network Manager.
Nov 29 05:03:35 compute-0 systemd[1]: NetworkManager.service: Consumed 11.554s CPU time, 4.1M memory peak, read 0B from disk, written 21.5K to disk.
Nov 29 05:03:35 compute-0 systemd[1]: Starting Network Manager...
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.6082] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:919d61e4-148b-4df4-a773-feb4933c1c42)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.6083] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.6138] manager[0x5587cb08d090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 05:03:35 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 05:03:35 compute-0 systemd[1]: Started Hostname Service.
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7158] hostname: hostname: using hostnamed
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7160] hostname: static hostname changed from (none) to "compute-0"
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7164] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7167] manager[0x5587cb08d090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7167] manager[0x5587cb08d090]: rfkill: WWAN hardware radio set enabled
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7185] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7193] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7194] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7194] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7195] manager: Networking is enabled by state file
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7196] settings: Loaded settings plugin: keyfile (internal)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7199] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7221] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7229] dhcp: init: Using DHCP client 'internal'
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7231] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7236] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7240] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7246] device (lo): Activation: starting connection 'lo' (aeac58a6-e034-4337-948c-d58870c36302)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7252] device (eth0): carrier: link connected
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7255] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7259] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7259] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7264] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7270] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7275] device (eth1): carrier: link connected
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7278] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7283] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4) (indicated)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7284] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7290] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7296] device (eth1): Activation: starting connection 'ci-private-network' (ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7303] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 05:03:35 compute-0 systemd[1]: Started Network Manager.
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7309] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7311] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7313] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7315] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7317] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7319] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7320] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7321] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7325] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7327] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7336] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7349] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7366] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7370] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7428] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7433] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7434] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7438] device (lo): Activation: successful, device activated.
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7695] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7697] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7699] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7703] device (eth1): Activation: successful, device activated.
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7716] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7717] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7721] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7725] device (eth0): Activation: successful, device activated.
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7730] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 05:03:35 compute-0 NetworkManager[49073]: <info>  [1764392615.7734] manager: startup complete
Nov 29 05:03:35 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 29 05:03:35 compute-0 sudo[49057]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:35 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 29 05:03:36 compute-0 sudo[49284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaylbirwnbcjnugtamcvebdhugheutey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392615.9529107-168-167464284085332/AnsiballZ_dnf.py'
Nov 29 05:03:36 compute-0 sudo[49284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:36 compute-0 python3.9[49286]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:03:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 05:03:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 05:03:43 compute-0 systemd[1]: Reloading.
Nov 29 05:03:43 compute-0 systemd-rc-local-generator[49335]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:03:43 compute-0 systemd-sysv-generator[49338]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:03:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 05:03:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 05:03:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 05:03:44 compute-0 systemd[1]: run-r3f3c4d21c5f04aa6950471f271280895.service: Deactivated successfully.
Nov 29 05:03:44 compute-0 sudo[49284]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:44 compute-0 sudo[49744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfqnpwjxxvenhipqicsarcwanpqerpig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392624.6715887-180-171648540097421/AnsiballZ_stat.py'
Nov 29 05:03:44 compute-0 sudo[49744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:45 compute-0 python3.9[49746]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:03:45 compute-0 sudo[49744]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:45 compute-0 sudo[49896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmeywqddrcofhbxfrnqldminblcjwqgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392625.4326708-189-195400404003776/AnsiballZ_ini_file.py'
Nov 29 05:03:45 compute-0 sudo[49896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:45 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 05:03:46 compute-0 python3.9[49898]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:03:46 compute-0 sudo[49896]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:46 compute-0 sudo[50050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-optyzjtxjfigdzmutogxsountszgoaxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392626.3577175-199-68088390035165/AnsiballZ_ini_file.py'
Nov 29 05:03:46 compute-0 sudo[50050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:46 compute-0 python3.9[50052]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:03:46 compute-0 sudo[50050]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:47 compute-0 sudo[50202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fspqzruhuefwhofqxnkrvwtxdzwnrvrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392627.1579502-199-164836468633506/AnsiballZ_ini_file.py'
Nov 29 05:03:47 compute-0 sudo[50202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:47 compute-0 python3.9[50204]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:03:47 compute-0 sudo[50202]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:48 compute-0 sudo[50354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkeuebmyfmyxzptcbplrmpajkwlftedx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392627.8786006-214-107832459059924/AnsiballZ_ini_file.py'
Nov 29 05:03:48 compute-0 sudo[50354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:48 compute-0 python3.9[50356]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:03:48 compute-0 sudo[50354]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:48 compute-0 sudo[50506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwvqykxpjwfegrdcbezdjmskzktechao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392628.5095026-214-174114347310156/AnsiballZ_ini_file.py'
Nov 29 05:03:48 compute-0 sudo[50506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:48 compute-0 python3.9[50508]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:03:48 compute-0 sudo[50506]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:49 compute-0 sudo[50658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsbxtpfwjwohkkyvgczienlqakubeark ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392629.1658833-229-70588590808844/AnsiballZ_stat.py'
Nov 29 05:03:49 compute-0 sudo[50658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:49 compute-0 python3.9[50660]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:03:49 compute-0 sudo[50658]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:50 compute-0 sudo[50781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etafyaftplzyutxkdmeddcqkrggqqypk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392629.1658833-229-70588590808844/AnsiballZ_copy.py'
Nov 29 05:03:50 compute-0 sudo[50781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:50 compute-0 python3.9[50783]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392629.1658833-229-70588590808844/.source _original_basename=.n18vukfd follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:03:50 compute-0 sudo[50781]: pam_unix(sudo:session): session closed for user root
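The stat/copy pair above is how ansible.legacy.copy works: it first stats the destination and compares SHA-1 checksums, and only ships the file when they differ. Mode 0755 makes the hook executable by dhclient. A sketch, with the local source name assumed:

    - name: Install dhclient enter hooks (sketch)
      become: true
      ansible.builtin.copy:
        src: dhclient-enter-hooks    # local file name is an assumption
        dest: /etc/dhcp/dhclient-enter-hooks
        mode: "0755"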
Nov 29 05:03:50 compute-0 sudo[50933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvlkmaadgwsneotnlacbalftjzyxdeem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392630.5269146-244-94755969567598/AnsiballZ_file.py'
Nov 29 05:03:50 compute-0 sudo[50933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:50 compute-0 python3.9[50935]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:03:50 compute-0 sudo[50933]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:51 compute-0 sudo[51085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcpvqhardibyazhpffaokbnmekxitgir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392631.1813612-252-273969180895832/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 29 05:03:51 compute-0 sudo[51085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:51 compute-0 python3.9[51087]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 05:03:51 compute-0 sudo[51085]: pam_unix(sudo:session): session closed for user root
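The edpm_os_net_config_mappings module (part of the EDPM Ansible tooling; the log records only the short module name, so the collection FQCN is omitted here) translates a per-node NIC mapping table into os-net-config's mapping file. Here it runs with an empty lookup, i.e. no per-host overrides. A minimal sketch:

    - name: Configure os-net-config nic mappings (sketch)
      become: true
      edpm_os_net_config_mappings:
        net_config_data_lookup: {}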
Nov 29 05:03:52 compute-0 sudo[51237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avcvgjtorljczcrvmbjoxqdefnnftfnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392632.172064-261-59927399914972/AnsiballZ_file.py'
Nov 29 05:03:52 compute-0 sudo[51237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:52 compute-0 python3.9[51239]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:03:52 compute-0 sudo[51237]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:53 compute-0 sudo[51389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfcdxgzllgaojlgpqivkmdtcdzphvpjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392633.0303817-271-79893859907059/AnsiballZ_stat.py'
Nov 29 05:03:53 compute-0 sudo[51389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:53 compute-0 sudo[51389]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:54 compute-0 sudo[51512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnyqvhpdfqyjnovhspyliwzbmnzvuqqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392633.0303817-271-79893859907059/AnsiballZ_copy.py'
Nov 29 05:03:54 compute-0 sudo[51512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:54 compute-0 sudo[51512]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:55 compute-0 sudo[51664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mashzbhlokzkuwqgisxawoexkclybscg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392634.643084-286-225309301101868/AnsiballZ_slurp.py'
Nov 29 05:03:55 compute-0 sudo[51664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:55 compute-0 python3.9[51666]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 05:03:55 compute-0 sudo[51664]: pam_unix(sudo:session): session closed for user root
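ansible.builtin.slurp returns the file content base64-encoded, which is how the controller reads the rendered network config back without a fetch. A sketch; the register variable name is an assumption:

    - name: Read back the generated network config (sketch)
      become: true
      ansible.builtin.slurp:
        path: /etc/os-net-config/config.yaml
      register: net_config_raw
      # decode on the controller with "{{ net_config_raw.content | b64decode }}"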
Nov 29 05:03:56 compute-0 sudo[51839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhmkkeicgletwcjviebkjhaspmihzaja ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392635.5922756-295-165022383322458/async_wrapper.py j810879445122 300 /home/zuul/.ansible/tmp/ansible-tmp-1764392635.5922756-295-165022383322458/AnsiballZ_edpm_os_net_config.py _'
Nov 29 05:03:56 compute-0 sudo[51839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:03:56 compute-0 ansible-async_wrapper.py[51841]: Invoked with j810879445122 300 /home/zuul/.ansible/tmp/ansible-tmp-1764392635.5922756-295-165022383322458/AnsiballZ_edpm_os_net_config.py _
Nov 29 05:03:56 compute-0 ansible-async_wrapper.py[51844]: Starting module and watcher
Nov 29 05:03:56 compute-0 ansible-async_wrapper.py[51844]: Start watching 51845 (300)
Nov 29 05:03:56 compute-0 ansible-async_wrapper.py[51845]: Start module (51845)
Nov 29 05:03:56 compute-0 ansible-async_wrapper.py[51841]: Return async_wrapper task started.
Nov 29 05:03:56 compute-0 sudo[51839]: pam_unix(sudo:session): session closed for user root
Nov 29 05:03:56 compute-0 python3.9[51846]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
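The ansible-async_wrapper.py lines above show the standard Ansible async pattern: the wrapper forks a watcher around the real module (the logged "Start watching 51845 (300)" is the 300 s timeout) and returns immediately, and the play later polls with async_status (seen at 05:04:00 below). Reconstructed from the logged parameters; poll value and register name are assumptions:

    - name: Run os-net-config (sketch)
      become: true
      edpm_os_net_config:
        cleanup: true
        config_file: /etc/os-net-config/config.yaml
        debug: true
        detailed_exit_codes: true
        safe_defaults: false
        use_nmstate: true
      async: 300
      poll: 0
      register: net_config_job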
Nov 29 05:03:57 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 05:03:57 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 05:03:57 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 05:03:57 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 05:03:57 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6018] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6030] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51847 uid=0 result="success"
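Before touching any device, the tool creates a NetworkManager checkpoint so every change below can be rolled back automatically if connectivity is lost before the new state is confirmed; the rollback timeout is extended while work proceeds, and the checkpoint is destroyed once the configuration is confirmed (05:04:00 below). Checkpoints are exposed only over D-Bus. A hand-run equivalent for illustration only, not what the module executes: a 60-second checkpoint over all devices (empty device array), wrapped in a task to show the call shape:

    - name: Create a 60 s NM checkpoint over all devices (illustrative sketch)
      become: true
      ansible.builtin.command:
        cmd: >-
          busctl call org.freedesktop.NetworkManager
          /org/freedesktop/NetworkManager
          org.freedesktop.NetworkManager
          CheckpointCreate aouu 0 60 0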
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6444] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6446] audit: op="connection-add" uuid="7134afe0-a31f-4294-bb07-316f3a9e03e9" name="br-ex-br" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6459] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6461] audit: op="connection-add" uuid="4bf186af-c248-48ba-a07d-3c0e65d194df" name="br-ex-port" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6470] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6472] audit: op="connection-add" uuid="bd97148d-e4f7-4765-87ee-c00ec35a7ccc" name="eth1-port" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6482] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6483] audit: op="connection-add" uuid="f0efa15f-cb45-45d2-bb7d-16b52fe1d2b2" name="vlan20-port" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6494] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6496] audit: op="connection-add" uuid="9d83f726-2e99-487a-a917-6d4c8d3c35c4" name="vlan21-port" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6505] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6506] audit: op="connection-add" uuid="56d4e035-1f1c-402e-a0b9-5300d1d08bf7" name="vlan22-port" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6516] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6518] audit: op="connection-add" uuid="d8366d4d-6eb8-4359-a534-94e4585031a4" name="vlan23-port" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6535] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6549] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6551] audit: op="connection-add" uuid="ab610bb0-cf0e-449c-b95c-b2b3a1383e00" name="br-ex-if" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6600] audit: op="connection-update" uuid="ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4" name="ci-private-network" args="ipv4.method,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.never-default,ipv4.routes,ovs-external-ids.data,connection.controller,connection.master,connection.port-type,connection.slave-type,connection.timestamp,ipv6.method,ipv6.addr-gen-mode,ipv6.addresses,ipv6.dns,ipv6.routes,ipv6.routing-rules,ovs-interface.type" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6615] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6616] audit: op="connection-add" uuid="c413ae8e-9915-4a9f-ae3c-de6da5b56e0e" name="vlan20-if" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6630] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6632] audit: op="connection-add" uuid="80d50742-f63a-4985-aaeb-ea9f89dcf489" name="vlan21-if" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6645] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6646] audit: op="connection-add" uuid="be0c2d3d-189e-4692-a4f8-0b760f1e6e68" name="vlan22-if" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6661] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6662] audit: op="connection-add" uuid="a2d772d0-ce56-40ed-b7f3-df914f508e4e" name="vlan23-if" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6672] audit: op="connection-delete" uuid="68471d98-bb78-39be-9a57-275a98f2e1d6" name="Wired connection 1" pid=51847 uid=0 result="success"
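The burst of connection-add operations above builds a single OVS topology: bridge br-ex carrying the physical port eth1 and the internal interfaces vlan20 through vlan23, with the br-ex interface itself taking the ci-private-network addressing, while the stale "Wired connection 1" profile is deleted. The /etc/os-net-config/config.yaml driving this plausibly resembles the sketch below; only the device names come from the log, and the VLAN ids and DHCP setting are assumptions inferred from the interface names:

    network_config:
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false
        members:
          - type: interface
            name: eth1
          - type: vlan
            vlan_id: 20    # assumed to match the vlan20 interface name
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22
          - type: vlan
            vlan_id: 23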
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6682] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6691] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6695] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (7134afe0-a31f-4294-bb07-316f3a9e03e9)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6696] audit: op="connection-activate" uuid="7134afe0-a31f-4294-bb07-316f3a9e03e9" name="br-ex-br" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6697] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6703] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6707] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (4bf186af-c248-48ba-a07d-3c0e65d194df)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6708] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6714] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6717] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (bd97148d-e4f7-4765-87ee-c00ec35a7ccc)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6719] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6725] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6729] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f0efa15f-cb45-45d2-bb7d-16b52fe1d2b2)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6730] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6736] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6739] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (9d83f726-2e99-487a-a917-6d4c8d3c35c4)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6741] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6747] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6751] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (56d4e035-1f1c-402e-a0b9-5300d1d08bf7)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6753] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6758] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6762] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (d8366d4d-6eb8-4359-a534-94e4585031a4)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6763] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6765] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6767] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6772] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6777] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6780] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ab610bb0-cf0e-449c-b95c-b2b3a1383e00)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6781] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6784] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6786] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6787] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6788] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6798] device (eth1): disconnecting for new activation request.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6798] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6801] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6803] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6804] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6807] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6811] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6815] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c413ae8e-9915-4a9f-ae3c-de6da5b56e0e)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6815] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6818] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6820] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6821] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6824] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6828] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6832] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (80d50742-f63a-4985-aaeb-ea9f89dcf489)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6833] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6835] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6837] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6838] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6840] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6845] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6849] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (be0c2d3d-189e-4692-a4f8-0b760f1e6e68)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6850] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6852] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6854] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6855] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6858] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6862] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6866] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (a2d772d0-ce56-40ed-b7f3-df914f508e4e)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6867] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6870] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6871] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6873] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6874] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6884] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6886] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6889] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6891] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6896] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6899] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6903] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6906] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6908] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6912] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6916] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6919] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6920] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6925] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6929] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6932] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6933] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6938] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6941] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6944] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6946] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6950] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 kernel: Timeout policy base is empty
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6953] dhcp4 (eth0): canceled DHCP transaction
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6953] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6953] dhcp4 (eth0): state changed no lease
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6954] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 05:03:58 compute-0 systemd-udevd[51852]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6962] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.6964] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51847 uid=0 result="fail" reason="Device is not activated"
Nov 29 05:03:58 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7006] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7010] dhcp4 (eth0): state changed new lease, address=38.102.83.17
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7017] device (eth1): disconnecting for new activation request.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7017] audit: op="connection-activate" uuid="ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4" name="ci-private-network" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7057] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7065] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7071] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7085] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51847 uid=0 result="success"
Nov 29 05:03:58 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7164] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7270] device (eth1): Activation: starting connection 'ci-private-network' (ec874bcb-0345-5eb4-84dc-dc5a2c0a75f4)
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7282] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7286] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7294] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7296] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7298] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7300] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7303] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7305] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7306] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7317] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7324] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7327] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7332] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7337] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7340] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7344] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7348] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7352] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7356] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7361] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7365] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7368] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7372] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 kernel: br-ex: entered promiscuous mode
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7377] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7385] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7393] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7450] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7456] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7465] device (eth1): Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7519] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7540] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 kernel: vlan22: entered promiscuous mode
Nov 29 05:03:58 compute-0 kernel: vlan23: entered promiscuous mode
Nov 29 05:03:58 compute-0 systemd-udevd[51853]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7650] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7651] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7659] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 kernel: vlan21: entered promiscuous mode
Nov 29 05:03:58 compute-0 systemd-udevd[51851]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7754] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7767] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 kernel: vlan20: entered promiscuous mode
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7785] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7787] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7804] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7815] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7824] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7880] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7881] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7885] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7893] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7906] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7916] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7934] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7975] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7976] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7977] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7983] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7987] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 05:03:58 compute-0 NetworkManager[49073]: <info>  [1764392638.7993] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 05:03:59 compute-0 sshd-session[48788]: ssh_dispatch_run_fatal: Connection from 101.47.141.125 port 44800: Connection timed out [preauth]
Nov 29 05:03:59 compute-0 NetworkManager[49073]: <info>  [1764392639.9335] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51847 uid=0 result="success"
Nov 29 05:04:00 compute-0 NetworkManager[49073]: <info>  [1764392640.1078] checkpoint[0x5587cb063950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 05:04:00 compute-0 NetworkManager[49073]: <info>  [1764392640.1081] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51847 uid=0 result="success"
Nov 29 05:04:00 compute-0 sudo[52204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixigmqdvztcanlhxtynbovluqdrvrwpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392639.7284393-295-115423635309268/AnsiballZ_async_status.py'
Nov 29 05:04:00 compute-0 sudo[52204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:00 compute-0 NetworkManager[49073]: <info>  [1764392640.3744] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51847 uid=0 result="success"
Nov 29 05:04:00 compute-0 NetworkManager[49073]: <info>  [1764392640.3753] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51847 uid=0 result="success"
Nov 29 05:04:00 compute-0 python3.9[52207]: ansible-ansible.legacy.async_status Invoked with jid=j810879445122.51841 mode=status _async_dir=/root/.ansible_async
Nov 29 05:04:00 compute-0 sudo[52204]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:00 compute-0 NetworkManager[49073]: <info>  [1764392640.5648] audit: op="networking-control" arg="global-dns-configuration" pid=51847 uid=0 result="success"
Nov 29 05:04:00 compute-0 NetworkManager[49073]: <info>  [1764392640.5678] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 05:04:00 compute-0 NetworkManager[49073]: <info>  [1764392640.5703] audit: op="networking-control" arg="global-dns-configuration" pid=51847 uid=0 result="success"
Nov 29 05:04:00 compute-0 NetworkManager[49073]: <info>  [1764392640.5737] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51847 uid=0 result="success"
Nov 29 05:04:00 compute-0 NetworkManager[49073]: <info>  [1764392640.7294] checkpoint[0x5587cb063a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 05:04:00 compute-0 NetworkManager[49073]: <info>  [1764392640.7302] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51847 uid=0 result="success"
Nov 29 05:04:00 compute-0 ansible-async_wrapper.py[51845]: Module complete (51845)
Nov 29 05:04:01 compute-0 ansible-async_wrapper.py[51844]: Done in kid B.
Nov 29 05:04:03 compute-0 sudo[52309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvgoxlklbqkmsuwwhxanyyirqrjmveuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392639.7284393-295-115423635309268/AnsiballZ_async_status.py'
Nov 29 05:04:03 compute-0 sudo[52309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:03 compute-0 python3.9[52311]: ansible-ansible.legacy.async_status Invoked with jid=j810879445122.51841 mode=status _async_dir=/root/.ansible_async
Nov 29 05:04:04 compute-0 sudo[52309]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:04 compute-0 sudo[52409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksodtpvzffkbklywiimmqbhzpcyswkcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392639.7284393-295-115423635309268/AnsiballZ_async_status.py'
Nov 29 05:04:04 compute-0 sudo[52409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:04 compute-0 python3.9[52411]: ansible-ansible.legacy.async_status Invoked with jid=j810879445122.51841 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 05:04:04 compute-0 sudo[52409]: pam_unix(sudo:session): session closed for user root
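Two async_status calls in mode=status poll the background job until it reports finished, and the final mode=cleanup call removes the status file under ~/.ansible_async. On the playbook side this is normally a single until-loop; retry and delay values here are assumptions:

    - name: Wait for os-net-config to finish (sketch)
      become: true
      ansible.builtin.async_status:
        jid: "{{ net_config_job.ansible_job_id }}"
      register: job_result
      until: job_result.finished
      retries: 60
      delay: 5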
Nov 29 05:04:05 compute-0 sudo[52561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcxaviktszbjxmfkgjueqlplcdofjatd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392644.831004-322-54733926492473/AnsiballZ_stat.py'
Nov 29 05:04:05 compute-0 sudo[52561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:05 compute-0 python3.9[52563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:04:05 compute-0 sudo[52561]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:05 compute-0 sudo[52684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-equblfhzbjxnjmaadbxsihjlyvpxtyhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392644.831004-322-54733926492473/AnsiballZ_copy.py'
Nov 29 05:04:05 compute-0 sudo[52684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:05 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 05:04:05 compute-0 python3.9[52686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392644.831004-322-54733926492473/.source.returncode _original_basename=.sumd37f6 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:04:05 compute-0 sudo[52684]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:06 compute-0 sudo[52838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xavgcslhcmfujuqoldjzngkceasgfhav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392646.0377102-338-46512492591641/AnsiballZ_stat.py'
Nov 29 05:04:06 compute-0 sudo[52838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:06 compute-0 python3.9[52840]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:04:06 compute-0 sudo[52838]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:06 compute-0 sudo[52961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sffxuzeygigtlahvqxujnyjmggsmtynb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392646.0377102-338-46512492591641/AnsiballZ_copy.py'
Nov 29 05:04:06 compute-0 sudo[52961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:07 compute-0 python3.9[52963]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392646.0377102-338-46512492591641/.source.cfg _original_basename=.sbmwoq28 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:04:07 compute-0 sudo[52961]: pam_unix(sudo:session): session closed for user root
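Dropping 99-edpm-disable-network-config.cfg into /etc/cloud/cloud.cfg.d stops cloud-init from rewriting, on the next boot, the network configuration that os-net-config just applied. The payload is not logged (content=NOT_LOGGING_PARAMETER), but the conventional cloud-init snippet for this purpose is:

    network:
      config: disabled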
Nov 29 05:04:07 compute-0 sudo[53114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwiunmylqltgbjygmkpcctgcvpsnyrkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392647.2989361-353-209157733999632/AnsiballZ_systemd.py'
Nov 29 05:04:07 compute-0 sudo[53114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:07 compute-0 python3.9[53116]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:04:07 compute-0 systemd[1]: Reloading Network Manager...
Nov 29 05:04:07 compute-0 NetworkManager[49073]: <info>  [1764392647.9441] audit: op="reload" arg="0" pid=53120 uid=0 result="success"
Nov 29 05:04:07 compute-0 NetworkManager[49073]: <info>  [1764392647.9453] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 05:04:07 compute-0 systemd[1]: Reloaded Network Manager.
Nov 29 05:04:07 compute-0 sudo[53114]: pam_unix(sudo:session): session closed for user root
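state=reloaded asks systemd to reload NetworkManager rather than restart it, so the freshly activated devices stay up while the edited configuration files (including the ini_file changes earlier) are re-read; the NM audit line op="reload" confirms it. A sketch of the task as logged:

    - name: Reload NetworkManager (sketch)
      become: true
      ansible.builtin.systemd:
        name: NetworkManager
        state: reloaded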
Nov 29 05:04:08 compute-0 sshd-session[45071]: Connection closed by 192.168.122.30 port 42046
Nov 29 05:04:08 compute-0 sshd-session[45068]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:04:08 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 05:04:08 compute-0 systemd[1]: session-9.scope: Consumed 48.949s CPU time.
Nov 29 05:04:08 compute-0 systemd-logind[793]: Session 9 logged out. Waiting for processes to exit.
Nov 29 05:04:08 compute-0 systemd-logind[793]: Removed session 9.
Nov 29 05:04:14 compute-0 sshd-session[53152]: Accepted publickey for zuul from 192.168.122.30 port 42154 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:04:14 compute-0 systemd-logind[793]: New session 10 of user zuul.
Nov 29 05:04:14 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 29 05:04:14 compute-0 sshd-session[53152]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:04:15 compute-0 python3.9[53305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:04:16 compute-0 python3.9[53459]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:04:17 compute-0 python3.9[53653]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:04:17 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 05:04:18 compute-0 sshd-session[53155]: Connection closed by 192.168.122.30 port 42154
Nov 29 05:04:18 compute-0 sshd-session[53152]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:04:18 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 05:04:18 compute-0 systemd[1]: session-10.scope: Consumed 2.335s CPU time.
Nov 29 05:04:18 compute-0 systemd-logind[793]: Session 10 logged out. Waiting for processes to exit.
Nov 29 05:04:18 compute-0 systemd-logind[793]: Removed session 10.
Nov 29 05:04:24 compute-0 sshd-session[53683]: Accepted publickey for zuul from 192.168.122.30 port 43540 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:04:24 compute-0 systemd-logind[793]: New session 11 of user zuul.
Nov 29 05:04:24 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 29 05:04:24 compute-0 sshd-session[53683]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:04:25 compute-0 python3.9[53836]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:04:26 compute-0 python3.9[53990]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:04:26 compute-0 sudo[54145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpmmtkirqepexlpomharzhwrljpfthsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392666.60815-40-259187791698649/AnsiballZ_setup.py'
Nov 29 05:04:26 compute-0 sudo[54145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:27 compute-0 python3.9[54147]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:04:27 compute-0 sudo[54145]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:28 compute-0 sudo[54229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsxgdtwwezlvuspycsgeutzgtfvyjzla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392666.60815-40-259187791698649/AnsiballZ_dnf.py'
Nov 29 05:04:28 compute-0 sudo[54229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:28 compute-0 python3.9[54231]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:04:29 compute-0 sudo[54229]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:30 compute-0 sudo[54383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chagqvqxiipivirevlksdbdagjpuygpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392669.659679-52-10367402154246/AnsiballZ_setup.py'
Nov 29 05:04:30 compute-0 sudo[54383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:30 compute-0 python3.9[54385]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:04:30 compute-0 sudo[54383]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:31 compute-0 sudo[54578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbrtmwqgoalavpcvhrtmclthkxypnqlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392671.014957-63-128862028625364/AnsiballZ_file.py'
Nov 29 05:04:31 compute-0 sudo[54578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:31 compute-0 python3.9[54580]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:04:31 compute-0 sudo[54578]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:32 compute-0 sudo[54730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wysdszywkzvybdxtoyawxrfkpehxwdva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392671.8482087-71-105567140845769/AnsiballZ_command.py'
Nov 29 05:04:32 compute-0 sudo[54730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:32 compute-0 python3.9[54732]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2518162104-merged.mount: Deactivated successfully.
Nov 29 05:04:32 compute-0 podman[54733]: 2025-11-29 05:04:32.560585852 +0000 UTC m=+0.043444690 system refresh
Nov 29 05:04:32 compute-0 sudo[54730]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:33 compute-0 sudo[54893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrqcerftawwdnrjdvqjeobwdpfqefctd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392672.7645547-79-253364804622671/AnsiballZ_stat.py'
Nov 29 05:04:33 compute-0 sudo[54893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:33 compute-0 python3.9[54895]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:04:33 compute-0 sudo[54893]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:04:33 compute-0 sudo[55016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpszxiznwcpjxqsnlwjkadialwjjnksu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392672.7645547-79-253364804622671/AnsiballZ_copy.py'
Nov 29 05:04:33 compute-0 sudo[55016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:34 compute-0 python3.9[55018]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392672.7645547-79-253364804622671/.source.json follow=False _original_basename=podman_network_config.j2 checksum=66982087fa23b413eb440583f0a34253a177e035 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:04:34 compute-0 sudo[55016]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:34 compute-0 sudo[55168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jraupxnhahhavtcxzbrpuutydfjoepww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392674.2821767-94-32409837467201/AnsiballZ_stat.py'
Nov 29 05:04:34 compute-0 sudo[55168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:34 compute-0 python3.9[55170]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:04:34 compute-0 sudo[55168]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:35 compute-0 sudo[55291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydzylqhfgthhfihmhinpqwdytxcocajs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392674.2821767-94-32409837467201/AnsiballZ_copy.py'
Nov 29 05:04:35 compute-0 sudo[55291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:35 compute-0 python3.9[55293]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764392674.2821767-94-32409837467201/.source.conf follow=False _original_basename=registries.conf.j2 checksum=b723c254c5347521a0bd9978182359a7d08823fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:04:35 compute-0 sudo[55291]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:36 compute-0 sudo[55443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvhuuivffrwerjzgwqqolpdelqfhgcrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392675.5931916-110-31912375522000/AnsiballZ_ini_file.py'
Nov 29 05:04:36 compute-0 sudo[55443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:36 compute-0 python3.9[55445]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:04:36 compute-0 sudo[55443]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:36 compute-0 sudo[55595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyfemjhvemrvdgjhvsiuyebywttykzcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392676.4258206-110-207838105497036/AnsiballZ_ini_file.py'
Nov 29 05:04:36 compute-0 sudo[55595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:36 compute-0 python3.9[55597]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:04:37 compute-0 sudo[55595]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:37 compute-0 sudo[55747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttcepuhcunoqjtlhqbfiymlcfucugshs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392677.1702418-110-98349825392919/AnsiballZ_ini_file.py'
Nov 29 05:04:37 compute-0 sudo[55747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:37 compute-0 python3.9[55749]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:04:37 compute-0 sudo[55747]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:38 compute-0 sudo[55899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htekmlzoyafnoaddgbxmnlwndwytoqfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392677.8810158-110-66927089562720/AnsiballZ_ini_file.py'
Nov 29 05:04:38 compute-0 sudo[55899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:38 compute-0 python3.9[55901]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:04:38 compute-0 sudo[55899]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:39 compute-0 sudo[56051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfepsrhiirkhufnlbjmvxrqkftmanrwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392678.93859-141-13521621486184/AnsiballZ_dnf.py'
Nov 29 05:04:39 compute-0 sudo[56051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:39 compute-0 python3.9[56053]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:04:40 compute-0 sudo[56051]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:41 compute-0 sudo[56204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnsgzzpfknjmnnnevhpinnjgzcvrnubd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392681.3030696-152-224721388794702/AnsiballZ_setup.py'
Nov 29 05:04:41 compute-0 sudo[56204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:41 compute-0 python3.9[56206]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:04:41 compute-0 sudo[56204]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:42 compute-0 sudo[56358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjqkqyywesmjxkuqmuntuycoieqqusdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392682.0758708-160-99718864187858/AnsiballZ_stat.py'
Nov 29 05:04:42 compute-0 sudo[56358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:42 compute-0 python3.9[56360]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:04:42 compute-0 sudo[56358]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:43 compute-0 sudo[56510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwrskwthohgdtjqujiaghblnainzpaff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392682.874382-169-105707207384826/AnsiballZ_stat.py'
Nov 29 05:04:43 compute-0 sudo[56510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:43 compute-0 python3.9[56512]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:04:43 compute-0 sudo[56510]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:44 compute-0 sudo[56662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtlqzauuhqifjnobaldunqmmfyzqzvyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392683.787266-179-268327044473582/AnsiballZ_command.py'
Nov 29 05:04:44 compute-0 sudo[56662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:44 compute-0 python3.9[56664]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:04:44 compute-0 sudo[56662]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:45 compute-0 sudo[56815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtthyawfbgvoghacdeeajimzdgpypreq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392684.4570994-189-211870906333650/AnsiballZ_service_facts.py'
Nov 29 05:04:45 compute-0 sudo[56815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:45 compute-0 python3.9[56817]: ansible-service_facts Invoked
Nov 29 05:04:45 compute-0 network[56834]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 05:04:45 compute-0 network[56835]: 'network-scripts' will be removed from distribution in near future.
Nov 29 05:04:45 compute-0 network[56836]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 05:04:49 compute-0 sudo[56815]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:51 compute-0 sudo[57119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvqqfbgsqycbiluukcibuvpczrrtcset ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764392690.5971146-204-8270967848457/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764392690.5971146-204-8270967848457/args'
Nov 29 05:04:51 compute-0 sudo[57119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:51 compute-0 sudo[57119]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:51 compute-0 sudo[57286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkkrjbmeojazbjivphfcxhhiwwpbopha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392691.3920717-215-47514671728920/AnsiballZ_dnf.py'
Nov 29 05:04:51 compute-0 sudo[57286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:51 compute-0 python3.9[57288]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:04:53 compute-0 sudo[57286]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:54 compute-0 sudo[57439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbpagpkpmwuussknispktofitnczgyoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392693.41798-228-30258932145446/AnsiballZ_package_facts.py'
Nov 29 05:04:54 compute-0 sudo[57439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:54 compute-0 python3.9[57441]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 05:04:54 compute-0 sudo[57439]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:55 compute-0 sudo[57591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znzhwxwsjktjwgtkztbgqjpwebnbsiib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392695.2488394-238-205140311183879/AnsiballZ_stat.py'
Nov 29 05:04:55 compute-0 sudo[57591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:55 compute-0 python3.9[57593]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:04:55 compute-0 sudo[57591]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:56 compute-0 sudo[57716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohszrgzgumogojzfgrgvesxvgbofhmbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392695.2488394-238-205140311183879/AnsiballZ_copy.py'
Nov 29 05:04:56 compute-0 sudo[57716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:56 compute-0 python3.9[57718]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392695.2488394-238-205140311183879/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:04:56 compute-0 sudo[57716]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:56 compute-0 sudo[57870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evvdhpnxrdysholsussacfadiwzympct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392696.6753645-253-269650056647101/AnsiballZ_stat.py'
Nov 29 05:04:56 compute-0 sudo[57870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:57 compute-0 python3.9[57872]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:04:57 compute-0 sudo[57870]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:57 compute-0 sudo[57995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmoxmfgmnehuaimeyfhvpwmjwqgpztzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392696.6753645-253-269650056647101/AnsiballZ_copy.py'
Nov 29 05:04:57 compute-0 sudo[57995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:57 compute-0 python3.9[57997]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392696.6753645-253-269650056647101/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:04:57 compute-0 sudo[57995]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:58 compute-0 sudo[58149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saeocchzfxqqexnjaqjplbpfhvihxnyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392698.2543852-274-120263186822341/AnsiballZ_lineinfile.py'
Nov 29 05:04:58 compute-0 sudo[58149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:04:58 compute-0 python3.9[58151]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:04:58 compute-0 sudo[58149]: pam_unix(sudo:session): session closed for user root
Nov 29 05:04:59 compute-0 sudo[58303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glzwudkalgcjokbbdugrzegxpjsecawt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392699.6791875-289-235149083534184/AnsiballZ_setup.py'
Nov 29 05:04:59 compute-0 sudo[58303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:00 compute-0 python3.9[58305]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:05:00 compute-0 sudo[58303]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:01 compute-0 sudo[58387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbgymmsqhtueccsqgucazrbedpgaapxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392699.6791875-289-235149083534184/AnsiballZ_systemd.py'
Nov 29 05:05:01 compute-0 sudo[58387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:01 compute-0 python3.9[58389]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:05:01 compute-0 sudo[58387]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:02 compute-0 sudo[58541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iahxptuqauyrngussrheuftnfcidkaer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392701.996565-305-182733469691980/AnsiballZ_setup.py'
Nov 29 05:05:02 compute-0 sudo[58541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:02 compute-0 python3.9[58543]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:05:02 compute-0 sudo[58541]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:03 compute-0 sudo[58625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpocehdgndrhwfbivpbwcucbcvbxeudw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392701.996565-305-182733469691980/AnsiballZ_systemd.py'
Nov 29 05:05:03 compute-0 sudo[58625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:03 compute-0 python3.9[58627]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:05:03 compute-0 chronyd[785]: chronyd exiting
Nov 29 05:05:03 compute-0 systemd[1]: Stopping NTP client/server...
Nov 29 05:05:03 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 05:05:03 compute-0 systemd[1]: Stopped NTP client/server.
Nov 29 05:05:03 compute-0 systemd[1]: Starting NTP client/server...
Nov 29 05:05:03 compute-0 chronyd[58635]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 05:05:03 compute-0 chronyd[58635]: Frequency -23.273 +/- 0.238 ppm read from /var/lib/chrony/drift
Nov 29 05:05:03 compute-0 chronyd[58635]: Loaded seccomp filter (level 2)
Nov 29 05:05:03 compute-0 systemd[1]: Started NTP client/server.
Nov 29 05:05:03 compute-0 sudo[58625]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:04 compute-0 sshd-session[53686]: Connection closed by 192.168.122.30 port 43540
Nov 29 05:05:04 compute-0 sshd-session[53683]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:05:04 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 05:05:04 compute-0 systemd[1]: session-11.scope: Consumed 26.081s CPU time.
Nov 29 05:05:04 compute-0 systemd-logind[793]: Session 11 logged out. Waiting for processes to exit.
Nov 29 05:05:04 compute-0 systemd-logind[793]: Removed session 11.
Nov 29 05:05:09 compute-0 sshd-session[58661]: Accepted publickey for zuul from 192.168.122.30 port 50202 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:05:09 compute-0 systemd-logind[793]: New session 12 of user zuul.
Nov 29 05:05:09 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 29 05:05:09 compute-0 sshd-session[58661]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:05:10 compute-0 sudo[58814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ritikjyqctoldxnchhzeingjfadttzzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392709.877126-22-124293050996944/AnsiballZ_file.py'
Nov 29 05:05:10 compute-0 sudo[58814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:10 compute-0 python3.9[58816]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:10 compute-0 sudo[58814]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:11 compute-0 sudo[58966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oznfjejynmfsxcecjrtkwhctajvcodtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392710.7651577-34-172661159434291/AnsiballZ_stat.py'
Nov 29 05:05:11 compute-0 sudo[58966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:11 compute-0 python3.9[58968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:11 compute-0 sudo[58966]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:11 compute-0 sudo[59089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbrmbftliokldkwfyzcuubrrnszmnya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392710.7651577-34-172661159434291/AnsiballZ_copy.py'
Nov 29 05:05:11 compute-0 sudo[59089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:12 compute-0 python3.9[59091]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392710.7651577-34-172661159434291/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:12 compute-0 sudo[59089]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:12 compute-0 sshd-session[58664]: Connection closed by 192.168.122.30 port 50202
Nov 29 05:05:12 compute-0 sshd-session[58661]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:05:12 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 05:05:12 compute-0 systemd[1]: session-12.scope: Consumed 1.635s CPU time.
Nov 29 05:05:12 compute-0 systemd-logind[793]: Session 12 logged out. Waiting for processes to exit.
Nov 29 05:05:12 compute-0 systemd-logind[793]: Removed session 12.
Nov 29 05:05:17 compute-0 sshd-session[59116]: Accepted publickey for zuul from 192.168.122.30 port 50214 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:05:17 compute-0 systemd-logind[793]: New session 13 of user zuul.
Nov 29 05:05:17 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 29 05:05:17 compute-0 sshd-session[59116]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:05:18 compute-0 python3.9[59269]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:05:19 compute-0 sudo[59423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcmfsdfwgcxrokjrxlronxkaizjkaktq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392718.8940067-33-248746661577964/AnsiballZ_file.py'
Nov 29 05:05:19 compute-0 sudo[59423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:19 compute-0 python3.9[59425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:19 compute-0 sudo[59423]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:20 compute-0 sudo[59598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opnpwvupcrvqknhbbngosismxedlssih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392719.8379722-41-240430673059201/AnsiballZ_stat.py'
Nov 29 05:05:20 compute-0 sudo[59598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:20 compute-0 python3.9[59600]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:20 compute-0 sudo[59598]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:21 compute-0 sudo[59721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uduzolxcehkjigxwjrwhqkvbueaepnhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392719.8379722-41-240430673059201/AnsiballZ_copy.py'
Nov 29 05:05:21 compute-0 sudo[59721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:21 compute-0 python3.9[59723]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764392719.8379722-41-240430673059201/.source.json _original_basename=.04p3dp1p follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:21 compute-0 sudo[59721]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:22 compute-0 sudo[59873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsimsrohjklcgtzzwznjfblhbqkhjyrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392721.952187-64-215496796275300/AnsiballZ_stat.py'
Nov 29 05:05:22 compute-0 sudo[59873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:22 compute-0 python3.9[59875]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:22 compute-0 sudo[59873]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:23 compute-0 sudo[59996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfdwlbliiqtgwhpehuinwfqobqgarqac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392721.952187-64-215496796275300/AnsiballZ_copy.py'
Nov 29 05:05:23 compute-0 sudo[59996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:23 compute-0 python3.9[59998]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392721.952187-64-215496796275300/.source _original_basename=.vmt1b2vi follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:23 compute-0 sudo[59996]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:23 compute-0 sudo[60148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaornqyfqlambqycxitbunymqqxpitcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392723.55391-80-270209629428777/AnsiballZ_file.py'
Nov 29 05:05:23 compute-0 sudo[60148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:24 compute-0 python3.9[60150]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:05:24 compute-0 sudo[60148]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:24 compute-0 sudo[60300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiefvkthukrjgkfzsgtfsiygaziqdfcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392724.3063686-88-248172249780150/AnsiballZ_stat.py'
Nov 29 05:05:24 compute-0 sudo[60300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:24 compute-0 python3.9[60302]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:24 compute-0 sudo[60300]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:25 compute-0 sudo[60423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qepfvouviziosklqapmkklfyjeudbvcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392724.3063686-88-248172249780150/AnsiballZ_copy.py'
Nov 29 05:05:25 compute-0 sudo[60423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:25 compute-0 python3.9[60425]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764392724.3063686-88-248172249780150/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:05:25 compute-0 sudo[60423]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:26 compute-0 sudo[60575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcthpuypsuvhwsdusdxbiphokqquyfek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392725.7499294-88-268025956672618/AnsiballZ_stat.py'
Nov 29 05:05:26 compute-0 sudo[60575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:26 compute-0 python3.9[60577]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:26 compute-0 sudo[60575]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:26 compute-0 sudo[60698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lffguckmsfrkhuddyjopqfymptiyvree ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392725.7499294-88-268025956672618/AnsiballZ_copy.py'
Nov 29 05:05:26 compute-0 sudo[60698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:26 compute-0 python3.9[60700]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764392725.7499294-88-268025956672618/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:05:26 compute-0 sudo[60698]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:27 compute-0 sudo[60850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vejfrtxgnmeutfyesrymqtjzbpvfbgbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392727.206935-117-231889808808904/AnsiballZ_file.py'
Nov 29 05:05:27 compute-0 sudo[60850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:27 compute-0 python3.9[60852]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:27 compute-0 sudo[60850]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:28 compute-0 sudo[61002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvolfdfrdtepwlxndckwxmyliufzyesl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392728.0305223-125-274526036898020/AnsiballZ_stat.py'
Nov 29 05:05:28 compute-0 sudo[61002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:28 compute-0 python3.9[61004]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:28 compute-0 sudo[61002]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:28 compute-0 sudo[61125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwpqeuglvgkjyarvytujsonkbjxoztvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392728.0305223-125-274526036898020/AnsiballZ_copy.py'
Nov 29 05:05:28 compute-0 sudo[61125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:29 compute-0 python3.9[61127]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392728.0305223-125-274526036898020/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:29 compute-0 sudo[61125]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:29 compute-0 sudo[61277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqtnlqexeaefndcnmsyefzktyenwpbxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392729.4679744-140-274158157628362/AnsiballZ_stat.py'
Nov 29 05:05:29 compute-0 sudo[61277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:30 compute-0 python3.9[61279]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:30 compute-0 sudo[61277]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:30 compute-0 sudo[61400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfofxujpqavtfqvmwkfpazthxlqhdnnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392729.4679744-140-274158157628362/AnsiballZ_copy.py'
Nov 29 05:05:30 compute-0 sudo[61400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:30 compute-0 python3.9[61402]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392729.4679744-140-274158157628362/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:30 compute-0 sudo[61400]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:31 compute-0 sudo[61552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orclfklwpxwjrqsppwfdeiirurwuofla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392731.0522144-155-190304247411931/AnsiballZ_systemd.py'
Nov 29 05:05:31 compute-0 sudo[61552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:32 compute-0 python3.9[61554]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:05:32 compute-0 systemd[1]: Reloading.
Nov 29 05:05:32 compute-0 systemd-sysv-generator[61585]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:05:32 compute-0 systemd-rc-local-generator[61579]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:05:32 compute-0 systemd[1]: Reloading.
Nov 29 05:05:32 compute-0 systemd-rc-local-generator[61620]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:05:32 compute-0 systemd-sysv-generator[61623]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:05:32 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 05:05:32 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 05:05:32 compute-0 sudo[61552]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:33 compute-0 sudo[61780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntkzblfzwjkzobfbdtzxeelkmrnuaqic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392732.8973076-163-81156012680332/AnsiballZ_stat.py'
Nov 29 05:05:33 compute-0 sudo[61780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:33 compute-0 python3.9[61782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:33 compute-0 sudo[61780]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:34 compute-0 sudo[61903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwcaoipkfouprxibdatunslqyllbddcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392732.8973076-163-81156012680332/AnsiballZ_copy.py'
Nov 29 05:05:34 compute-0 sudo[61903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:34 compute-0 python3.9[61905]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392732.8973076-163-81156012680332/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:34 compute-0 sudo[61903]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:34 compute-0 sudo[62055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqgqmhayflgehfwilthvedsbxnhnpmcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392734.428242-178-169115281750235/AnsiballZ_stat.py'
Nov 29 05:05:34 compute-0 sudo[62055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:35 compute-0 python3.9[62057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:35 compute-0 sudo[62055]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:35 compute-0 sudo[62178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdyjfcuzslpirgbpqfxcmkzcbcphbdns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392734.428242-178-169115281750235/AnsiballZ_copy.py'
Nov 29 05:05:35 compute-0 sudo[62178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:35 compute-0 python3.9[62180]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392734.428242-178-169115281750235/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:35 compute-0 sudo[62178]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:36 compute-0 sudo[62330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scgseurcqtpghovhsjlkstnygsnmtqbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392735.9741096-193-280660014265133/AnsiballZ_systemd.py'
Nov 29 05:05:36 compute-0 sudo[62330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:36 compute-0 python3.9[62332]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:05:36 compute-0 systemd[1]: Reloading.
Nov 29 05:05:36 compute-0 systemd-rc-local-generator[62359]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:05:36 compute-0 systemd-sysv-generator[62363]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:05:36 compute-0 systemd[1]: Reloading.
Nov 29 05:05:37 compute-0 systemd-rc-local-generator[62390]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:05:37 compute-0 systemd-sysv-generator[62394]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:05:37 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 05:05:37 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 05:05:37 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 05:05:37 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 05:05:37 compute-0 sudo[62330]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:38 compute-0 python3.9[62556]: ansible-ansible.builtin.service_facts Invoked
Nov 29 05:05:38 compute-0 network[62573]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 05:05:38 compute-0 network[62574]: 'network-scripts' will be removed from distribution in near future.
Nov 29 05:05:38 compute-0 network[62575]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 05:05:42 compute-0 sudo[62835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffwpzebjyljlzwhdltpguiqcanqxuxri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392741.7903197-209-125296216470396/AnsiballZ_systemd.py'
Nov 29 05:05:42 compute-0 sudo[62835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:42 compute-0 python3.9[62837]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:05:42 compute-0 systemd[1]: Reloading.
Nov 29 05:05:42 compute-0 systemd-sysv-generator[62864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:05:42 compute-0 systemd-rc-local-generator[62861]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:05:42 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 05:05:43 compute-0 iptables.init[62877]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 05:05:43 compute-0 iptables.init[62877]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 05:05:43 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 05:05:43 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 05:05:43 compute-0 sudo[62835]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:43 compute-0 sudo[63071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwccgjqbbgegfdcgkalpnfpcemzrziph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392743.3507698-209-44316198003469/AnsiballZ_systemd.py'
Nov 29 05:05:43 compute-0 sudo[63071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:44 compute-0 python3.9[63073]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:05:44 compute-0 sudo[63071]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:44 compute-0 sudo[63225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgagbiiiigbyfgwqnschbeneutjwfgyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392744.454291-225-107528641991465/AnsiballZ_systemd.py'
Nov 29 05:05:44 compute-0 sudo[63225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:45 compute-0 python3.9[63227]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:05:45 compute-0 systemd[1]: Reloading.
Nov 29 05:05:45 compute-0 systemd-rc-local-generator[63255]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:05:45 compute-0 systemd-sysv-generator[63258]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:05:45 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 29 05:05:45 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 29 05:05:45 compute-0 sudo[63225]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:46 compute-0 sudo[63416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjnzhhkkmeatpaoapweihzeimdngncgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392745.7065244-233-67975427318292/AnsiballZ_command.py'
Nov 29 05:05:46 compute-0 sudo[63416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:46 compute-0 python3.9[63418]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:05:46 compute-0 sudo[63416]: pam_unix(sudo:session): session closed for user root
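The `ansible.legacy.command` task above is a thin wrapper around one binary call. A minimal Python sketch of the same step, assuming `nft` is on PATH and the caller has root/CAP_NET_ADMIN:

```python
import subprocess

# Wipe every table, chain and rule in one transaction -- exactly what the
# `nft flush ruleset` command task above runs.
subprocess.run(["nft", "flush", "ruleset"], check=True)
```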
Nov 29 05:05:47 compute-0 sudo[63569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmibsvceyxekyslnnrikjfuqbhjsngii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392746.9505754-247-102569168809996/AnsiballZ_stat.py'
Nov 29 05:05:47 compute-0 sudo[63569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:47 compute-0 python3.9[63571]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:47 compute-0 sudo[63569]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:47 compute-0 sudo[63694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htpfkdzhooeiyipgmjgojqscbqenurlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392746.9505754-247-102569168809996/AnsiballZ_copy.py'
Nov 29 05:05:47 compute-0 sudo[63694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:48 compute-0 python3.9[63696]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392746.9505754-247-102569168809996/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:48 compute-0 sudo[63694]: pam_unix(sudo:session): session closed for user root
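The copy task above uses `validate=/usr/sbin/sshd -T -f %s`, so a broken config can never land in /etc/ssh/sshd_config. A minimal sketch of those semantics (not the module's actual implementation): stage, validate, then rename atomically.

```python
import os
import shutil
import subprocess
import tempfile

def install_sshd_config(src: str, dest: str = "/etc/ssh/sshd_config") -> None:
    """Stage the file, let sshd parse it, then move it into place."""
    fd, staged = tempfile.mkstemp(dir=os.path.dirname(dest))
    os.close(fd)
    shutil.copyfile(src, staged)
    os.chmod(staged, 0o600)               # mode=0600 from the task above
    # sshd -T prints the effective config and exits non-zero on parse
    # errors, which is what makes it usable as a validator here.
    subprocess.run(["/usr/sbin/sshd", "-T", "-f", staged],
                   check=True, stdout=subprocess.DEVNULL)
    os.replace(staged, dest)              # atomic on the same filesystem
```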
Nov 29 05:05:48 compute-0 sudo[63847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvgfcayeaedvkrbwnhrcparbavugtrvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392748.4503636-262-92816180756052/AnsiballZ_systemd.py'
Nov 29 05:05:48 compute-0 sudo[63847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:49 compute-0 python3.9[63849]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:05:49 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 05:05:49 compute-0 sshd[1004]: Received SIGHUP; restarting.
Nov 29 05:05:49 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 05:05:49 compute-0 sshd[1004]: Server listening on 0.0.0.0 port 22.
Nov 29 05:05:49 compute-0 sshd[1004]: Server listening on :: port 22.
Nov 29 05:05:49 compute-0 sudo[63847]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:49 compute-0 sudo[64003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfuxkdxfzuqenkravplmvdhimdxjiyry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392749.4958332-270-52191242032469/AnsiballZ_file.py'
Nov 29 05:05:49 compute-0 sudo[64003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:50 compute-0 python3.9[64005]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:50 compute-0 sudo[64003]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:50 compute-0 sudo[64155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbyazdxkntkhfwcptrnofqwnnxdbhqup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392750.296178-278-87649035487829/AnsiballZ_stat.py'
Nov 29 05:05:50 compute-0 sudo[64155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:50 compute-0 python3.9[64157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:50 compute-0 sudo[64155]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:51 compute-0 sudo[64278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuolsdctythszeqhnjbmkrbwoyylsjhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392750.296178-278-87649035487829/AnsiballZ_copy.py'
Nov 29 05:05:51 compute-0 sudo[64278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:51 compute-0 python3.9[64280]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392750.296178-278-87649035487829/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:51 compute-0 sudo[64278]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:52 compute-0 sudo[64430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnigwscxjsjcfncnauhovvkthmffqmig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392751.894082-296-148367995618298/AnsiballZ_timezone.py'
Nov 29 05:05:52 compute-0 sudo[64430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:52 compute-0 python3.9[64432]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 05:05:52 compute-0 systemd[1]: Starting Time & Date Service...
Nov 29 05:05:52 compute-0 systemd[1]: Started Time & Date Service.
Nov 29 05:05:52 compute-0 sudo[64430]: pam_unix(sudo:session): session closed for user root
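The `community.general.timezone` task is why systemd-timedated starts in the two lines above: on systemd hosts the module goes through the Time & Date Service. The command-line equivalent, as a sketch:

```python
import subprocess

# Equivalent of the timezone module with name=UTC; timedatectl talks to
# systemd-timedated over D-Bus, matching the service start logged above.
subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
```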
Nov 29 05:05:53 compute-0 sudo[64587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oevgoxngqyfoyagyxnzghkqymkoydhcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392753.1513302-305-248307793133332/AnsiballZ_file.py'
Nov 29 05:05:53 compute-0 sudo[64587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:53 compute-0 python3.9[64589]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:53 compute-0 sudo[64587]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:54 compute-0 sudo[64739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zijncepqppqtfpeljazaqprvheukutka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392754.0648236-313-255410796355777/AnsiballZ_stat.py'
Nov 29 05:05:54 compute-0 sudo[64739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:54 compute-0 python3.9[64741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:54 compute-0 sudo[64739]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:54 compute-0 sudo[64862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtsgaurmogrznlxzclsupfculnnoktef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392754.0648236-313-255410796355777/AnsiballZ_copy.py'
Nov 29 05:05:54 compute-0 sudo[64862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:55 compute-0 python3.9[64864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392754.0648236-313-255410796355777/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:55 compute-0 sudo[64862]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:55 compute-0 sudo[65016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkfhxqdfsuieqgnkfyysxmzqhhwujapx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392755.4021316-328-12794856841765/AnsiballZ_stat.py'
Nov 29 05:05:55 compute-0 sudo[65016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:56 compute-0 python3.9[65018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:56 compute-0 sudo[65016]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:56 compute-0 sudo[65139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owpsdalamkhzmysbsktzrxwwemyftysn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392755.4021316-328-12794856841765/AnsiballZ_copy.py'
Nov 29 05:05:56 compute-0 sudo[65139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:56 compute-0 python3.9[65141]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764392755.4021316-328-12794856841765/.source.yaml _original_basename=.jvdpze1n follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:56 compute-0 sudo[65139]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:57 compute-0 sudo[65291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umaitsxjehpbwhcjvibhcwcbiulxkzdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392756.8597713-343-43015515002686/AnsiballZ_stat.py'
Nov 29 05:05:57 compute-0 sudo[65291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:57 compute-0 python3.9[65293]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:05:57 compute-0 sudo[65291]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:57 compute-0 sudo[65414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcvbmotwtytpihalenuyhdkxbryzsajx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392756.8597713-343-43015515002686/AnsiballZ_copy.py'
Nov 29 05:05:57 compute-0 sudo[65414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:58 compute-0 python3.9[65416]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392756.8597713-343-43015515002686/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:05:58 compute-0 sudo[65414]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:58 compute-0 sudo[65566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsyythdepyjnzhmdyjnsetyyahqgipqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392758.282011-358-168449171237613/AnsiballZ_command.py'
Nov 29 05:05:58 compute-0 sudo[65566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:58 compute-0 python3.9[65568]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:05:58 compute-0 sudo[65566]: pam_unix(sudo:session): session closed for user root
Nov 29 05:05:59 compute-0 sudo[65719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eycchcyjivyeaxwkxulewprncwkrahar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392759.0684905-366-9898253561308/AnsiballZ_command.py'
Nov 29 05:05:59 compute-0 sudo[65719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:05:59 compute-0 python3.9[65721]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:05:59 compute-0 sudo[65719]: pam_unix(sudo:session): session closed for user root
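`nft -j list ruleset` (the task above) emits JSON of the form {"nftables": [...]}, where each element wraps one "metainfo", "table", "chain" or "rule" object. A sketch of consuming that output, here counting rules per table:

```python
import json
import subprocess

out = subprocess.run(["nft", "-j", "list", "ruleset"],
                     check=True, capture_output=True, text=True).stdout
counts = {}
for item in json.loads(out).get("nftables", []):
    rule = item.get("rule")
    if rule:  # rule objects carry family, table, chain, handle, expr
        key = (rule["family"], rule["table"])
        counts[key] = counts.get(key, 0) + 1
print(counts)
```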
Nov 29 05:05:59 compute-0 sshd-session[64988]: Connection closed by 101.47.141.125 port 50204 [preauth]
Nov 29 05:06:00 compute-0 sudo[65872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbtrqvsovmtnzbhpynbtvugnwviwfwzi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764392759.8954365-374-255183748986321/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 05:06:00 compute-0 sudo[65872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:00 compute-0 python3[65874]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 05:06:00 compute-0 sudo[65872]: pam_unix(sudo:session): session closed for user root
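`edpm_nftables_from_files` is a custom module from the edpm-ansible collection; its internals are not shown in this log. Judging only by its name and the `src=/var/lib/edpm-config/firewall` argument, it plausibly aggregates the YAML snippets dropped there earlier (sshd-networks.yaml, edpm-nftables-base.yaml, edpm-nftables-user-rules.yaml). A rough, hypothetical sketch of that gist, assuming PyYAML and list-shaped snippets:

```python
import glob
import yaml  # PyYAML

def load_firewall_snippets(src: str = "/var/lib/edpm-config/firewall"):
    """Hypothetical: merge every *.yaml rule snippet under src, in lexical
    order, into one flat list of rule definitions."""
    rules = []
    for path in sorted(glob.glob(f"{src}/*.yaml")):
        with open(path) as fh:
            data = yaml.safe_load(fh) or []
            rules.extend(data if isinstance(data, list) else [data])
    return rules
```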
Nov 29 05:06:01 compute-0 sudo[66024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhafatnyrspgexbilbwhuwttxvcllklk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392760.9234476-382-89658620873407/AnsiballZ_stat.py'
Nov 29 05:06:01 compute-0 sudo[66024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:01 compute-0 python3.9[66026]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:06:01 compute-0 sudo[66024]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:02 compute-0 sudo[66147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvusdlgztkpkioyoxawflyvqralfbomx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392760.9234476-382-89658620873407/AnsiballZ_copy.py'
Nov 29 05:06:02 compute-0 sudo[66147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:02 compute-0 python3.9[66149]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392760.9234476-382-89658620873407/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:02 compute-0 sudo[66147]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:02 compute-0 sudo[66299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taigaupsanlapnmpgfboshnldxgskvpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392762.5360312-397-61039317549493/AnsiballZ_stat.py'
Nov 29 05:06:02 compute-0 sudo[66299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:03 compute-0 python3.9[66301]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:06:03 compute-0 sudo[66299]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:03 compute-0 sudo[66422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjewqrvgqxsxmesuyxqljuwcoecxdkic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392762.5360312-397-61039317549493/AnsiballZ_copy.py'
Nov 29 05:06:03 compute-0 sudo[66422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:03 compute-0 python3.9[66424]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392762.5360312-397-61039317549493/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:03 compute-0 sudo[66422]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:04 compute-0 sudo[66574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwfkbonkaodbhqlkwyxrdfqacddbqmdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392763.8853915-412-104477720383442/AnsiballZ_stat.py'
Nov 29 05:06:04 compute-0 sudo[66574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:04 compute-0 python3.9[66576]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:06:04 compute-0 sudo[66574]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:04 compute-0 sudo[66697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qguqrrhfjoejzsmlnibqbjmvzgetmtdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392763.8853915-412-104477720383442/AnsiballZ_copy.py'
Nov 29 05:06:04 compute-0 sudo[66697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:05 compute-0 python3.9[66699]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392763.8853915-412-104477720383442/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:05 compute-0 sudo[66697]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:05 compute-0 sudo[66849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtpjttzsyhvrayhrgmomvnhnvvvyxnio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392765.2667303-427-239337597213465/AnsiballZ_stat.py'
Nov 29 05:06:05 compute-0 sudo[66849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:05 compute-0 python3.9[66851]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:06:05 compute-0 sudo[66849]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:06 compute-0 sudo[66972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjtpwzxbmrygbahgdqkozfghmezcebyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392765.2667303-427-239337597213465/AnsiballZ_copy.py'
Nov 29 05:06:06 compute-0 sudo[66972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:06 compute-0 python3.9[66974]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392765.2667303-427-239337597213465/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:06 compute-0 sudo[66972]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:06 compute-0 sudo[67124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dijslknojmtnbrbflhcmxlxtvvflxrnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392766.492575-442-203027630620069/AnsiballZ_stat.py'
Nov 29 05:06:06 compute-0 sudo[67124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:07 compute-0 python3.9[67126]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:06:07 compute-0 sudo[67124]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:07 compute-0 sudo[67247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnwsqjnooisyazemqqxenfgbxllmmgvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392766.492575-442-203027630620069/AnsiballZ_copy.py'
Nov 29 05:06:07 compute-0 sudo[67247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:07 compute-0 python3.9[67249]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764392766.492575-442-203027630620069/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:07 compute-0 sudo[67247]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:08 compute-0 sudo[67399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkygmergazmchprchmwcaxhfamhzvwqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392768.0105653-457-61939893211049/AnsiballZ_file.py'
Nov 29 05:06:08 compute-0 sudo[67399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:08 compute-0 python3.9[67401]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:08 compute-0 sudo[67399]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:09 compute-0 sudo[67551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqlalzhihblfnrukabknrjrjxpyrpvfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392768.8565419-465-38728350201178/AnsiballZ_command.py'
Nov 29 05:06:09 compute-0 sudo[67551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:09 compute-0 python3.9[67553]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:06:09 compute-0 sudo[67551]: pam_unix(sudo:session): session closed for user root
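The pipeline above is a dry-run of the whole generated ruleset: concatenate the five fragments in include order and let nft check them without applying (`-c` = check only, `-f -` = read from stdin). The same step as a sketch:

```python
import subprocess

FRAGMENTS = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]
blob = "".join(open(p).read() for p in FRAGMENTS)
# Raises CalledProcessError if any fragment fails to parse or reference-check.
subprocess.run(["nft", "-c", "-f", "-"], input=blob, text=True, check=True)
```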
Nov 29 05:06:10 compute-0 sudo[67710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jssldedkvjxoubrphyspygasuaqxmdfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392769.746943-473-260060526128622/AnsiballZ_blockinfile.py'
Nov 29 05:06:10 compute-0 sudo[67710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:10 compute-0 python3.9[67712]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:10 compute-0 sudo[67710]: pam_unix(sudo:session): session closed for user root
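The blockinfile task above leaves /etc/sysconfig/nftables.conf with a marker-delimited block of the four include statements visible in the task arguments, and re-validates the result with `nft -c -f %s`. A minimal sketch of the marker-replace logic, assuming the file already exists (create=False above) and omitting the validate step:

```python
import re

BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
END = "# END ANSIBLE MANAGED BLOCK"
BLOCK = "\n".join([
    'include "/etc/nftables/iptables.nft"',
    'include "/etc/nftables/edpm-chains.nft"',
    'include "/etc/nftables/edpm-rules.nft"',
    'include "/etc/nftables/edpm-jumps.nft"',
])

def upsert_block(path: str = "/etc/sysconfig/nftables.conf") -> None:
    text = open(path).read()
    managed = f"{BEGIN}\n{BLOCK}\n{END}\n"
    pattern = re.compile(re.escape(BEGIN) + r".*?" + re.escape(END) + r"\n?",
                         re.S)
    if pattern.search(text):          # replace an existing managed block
        text = pattern.sub(managed, text)
    else:                             # or append one at the end of the file
        if text and not text.endswith("\n"):
            text += "\n"
        text += managed
    open(path, "w").write(text)
```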
Nov 29 05:06:11 compute-0 sudo[67863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sytdiipkosjgznbhukpwbvyoqnvbeshl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392770.8585157-482-229508425010631/AnsiballZ_file.py'
Nov 29 05:06:11 compute-0 sudo[67863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:11 compute-0 python3.9[67865]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:11 compute-0 sudo[67863]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:11 compute-0 sudo[68015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orgemoorcxipaxdiwjizacjdsvbogzwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392771.5647376-482-100385830791702/AnsiballZ_file.py'
Nov 29 05:06:11 compute-0 sudo[68015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:12 compute-0 python3.9[68017]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:12 compute-0 sudo[68015]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:12 compute-0 sudo[68167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgftohtxelwevnktpiylizoegquehqyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392772.3957381-497-234112967016617/AnsiballZ_mount.py'
Nov 29 05:06:12 compute-0 sudo[68167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:13 compute-0 python3.9[68169]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 05:06:13 compute-0 sudo[68167]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:13 compute-0 sudo[68320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmelxnxuvrghivdwiqldfttfxnbcjikb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392773.4100058-497-33534700826009/AnsiballZ_mount.py'
Nov 29 05:06:13 compute-0 sudo[68320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:14 compute-0 python3.9[68322]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 05:06:14 compute-0 sudo[68320]: pam_unix(sudo:session): session closed for user root
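`ansible.posix.mount` with state=mounted both mounts the filesystem now and persists it; with boot=True the persistent form is an fstab entry. A sketch of the non-Ansible equivalent of the two hugepage tasks above (the fstab lines are an assumption consistent with the module arguments, and the mount calls are not idempotent like the module is):

```python
import subprocess

# Mount the 1 GiB and 2 MiB hugepage pools now...
subprocess.run(["mount", "-t", "hugetlbfs", "-o", "pagesize=1G",
                "none", "/dev/hugepages1G"], check=True)
subprocess.run(["mount", "-t", "hugetlbfs", "-o", "pagesize=2M",
                "none", "/dev/hugepages2M"], check=True)

# ...and persist across reboots (what boot=True amounts to), via fstab lines:
#   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
#   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```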
Nov 29 05:06:14 compute-0 sshd-session[59119]: Connection closed by 192.168.122.30 port 50214
Nov 29 05:06:14 compute-0 sshd-session[59116]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:06:14 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 05:06:14 compute-0 systemd[1]: session-13.scope: Consumed 41.017s CPU time.
Nov 29 05:06:14 compute-0 systemd-logind[793]: Session 13 logged out. Waiting for processes to exit.
Nov 29 05:06:14 compute-0 systemd-logind[793]: Removed session 13.
Nov 29 05:06:20 compute-0 sshd-session[68348]: Accepted publickey for zuul from 192.168.122.30 port 39158 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:06:20 compute-0 systemd-logind[793]: New session 14 of user zuul.
Nov 29 05:06:20 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 29 05:06:20 compute-0 sshd-session[68348]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:06:21 compute-0 sudo[68501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoujtgvcfnshcrvyixdjczjqxtpgmvro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392780.6169946-16-232350075573242/AnsiballZ_tempfile.py'
Nov 29 05:06:21 compute-0 sudo[68501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:21 compute-0 python3.9[68503]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 05:06:21 compute-0 sudo[68501]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:22 compute-0 sudo[68653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfimyskhuwzfoiouvqyfdynjhkgbnghj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392781.676673-28-91918482200249/AnsiballZ_stat.py'
Nov 29 05:06:22 compute-0 sudo[68653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:22 compute-0 python3.9[68655]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:06:22 compute-0 sudo[68653]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:22 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 05:06:23 compute-0 sudo[68807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbknmsnkomqoaxxgovnbxcwlotxnvwys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392782.707982-38-62403536640923/AnsiballZ_setup.py'
Nov 29 05:06:23 compute-0 sudo[68807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:23 compute-0 python3.9[68809]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:06:23 compute-0 sudo[68807]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:24 compute-0 sudo[68959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwvshprhsvvbpagfpcbsevvgxchtlums ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392783.840393-47-174500894801200/AnsiballZ_blockinfile.py'
Nov 29 05:06:24 compute-0 sudo[68959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:24 compute-0 python3.9[68961]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMckHMduWmwA/jneofKzqltVrdb/vEVNoPwADfQfHjxo2ViAjKtzRJxQm+bTvpTXgt3d3GaLwohXhYMtcnWss0rEYtIGMLiXWJAB76Vi4azFd32Hy0mDTGhpqL5tz3X/QJFmASZVWlpRz77RZoFzhuMtQpF581gmKi8QLN3n4kyPvi8IBRjIvdbSyN1hkk5nbYZFrdOhA0K7FLalaYs9fIyoD0rH+dijNp/mY8EbyOAWiPIFfzMZWqy9OkXlUKH6233dlpLGCHfD1uwqM55rv7g+qtOrKiOnqkc5b24MfjM3Dq8B/kIR3GisItM2fI/avStY0whFRyYPTqysal5H+pXy5+QCOGwsWv0POhypuwSVSbtY3NcfizytHcPT2Au6g3Xx/Gazoxx4fVkVLTjtzhz8URfMzAclsZVcUxtFyZlGHtoXumLkWdYeLYQA4dqkQVL7KwOEQp31HXuBfsc98k/UoOj9+SAEbQrLsEBhRXTSsD2bL350GMA7poDjiSC1k=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwQmzwqCS97U8wjy82krUlVUeH2sOvejp9p1btw+sbe
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHbvzG6Snia8dc8X++wUykISUD7zTpLyaTM0CVExLn67fyxHoL2pCwIcx6cP7HnIRC6S3Et2Ooooe+xc0kenKn0=
                                             create=True mode=0644 path=/tmp/ansible.pwh8t6zh state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:24 compute-0 sudo[68959]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:25 compute-0 sudo[69111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucniczsjytyyhacksnbppexwiewglrmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392784.6068802-55-168343441936825/AnsiballZ_command.py'
Nov 29 05:06:25 compute-0 sudo[69111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:25 compute-0 python3.9[69113]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.pwh8t6zh' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:06:25 compute-0 sudo[69111]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:25 compute-0 sudo[69265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwujrmbhbcdhcenlwtlaavujrmyrlddt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392785.4827542-63-240681460430727/AnsiballZ_file.py'
Nov 29 05:06:25 compute-0 sudo[69265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:26 compute-0 python3.9[69267]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.pwh8t6zh state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:26 compute-0 sudo[69265]: pam_unix(sudo:session): session closed for user root
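The four tasks in session 14 form one unit: create a temp file, fill it with the host's three public keys via blockinfile, clobber /etc/ssh/ssh_known_hosts from it with `cat`, then delete the temp file. A condensed sketch of the same flow (key lines elided; they appear verbatim in the blockinfile arguments above):

```python
import os
import shutil
import tempfile

entries = [
    # "<names,addresses> <key-type> <base64-key>" lines, one per host key,
    # as logged in the blockinfile task above
]

fd, tmp = tempfile.mkstemp(prefix="ansible.")
with os.fdopen(fd, "w") as fh:
    fh.write("# BEGIN ANSIBLE MANAGED BLOCK\n")
    fh.writelines(line + "\n" for line in entries)
    fh.write("# END ANSIBLE MANAGED BLOCK\n")
os.chmod(tmp, 0o644)
shutil.copyfile(tmp, "/etc/ssh/ssh_known_hosts")  # the `cat tmp > dest` step
os.unlink(tmp)                                    # the state=absent cleanup
```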
Nov 29 05:06:26 compute-0 sshd-session[68351]: Connection closed by 192.168.122.30 port 39158
Nov 29 05:06:26 compute-0 sshd-session[68348]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:06:26 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 05:06:26 compute-0 systemd[1]: session-14.scope: Consumed 3.591s CPU time.
Nov 29 05:06:26 compute-0 systemd-logind[793]: Session 14 logged out. Waiting for processes to exit.
Nov 29 05:06:26 compute-0 systemd-logind[793]: Removed session 14.
Nov 29 05:06:31 compute-0 sshd-session[69292]: Accepted publickey for zuul from 192.168.122.30 port 39094 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:06:31 compute-0 systemd-logind[793]: New session 15 of user zuul.
Nov 29 05:06:31 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 29 05:06:31 compute-0 sshd-session[69292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:06:32 compute-0 python3.9[69445]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:06:33 compute-0 sudo[69599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goolyubecygzaokolnupzlfrhhejttsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392793.1744504-32-158493589532006/AnsiballZ_systemd.py'
Nov 29 05:06:33 compute-0 sudo[69599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:34 compute-0 python3.9[69601]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 05:06:34 compute-0 sudo[69599]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:34 compute-0 sudo[69753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmlsrtyjzddleglklhotvzmcsknfkpre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392794.5154505-40-16236828291496/AnsiballZ_systemd.py'
Nov 29 05:06:34 compute-0 sudo[69753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:35 compute-0 python3.9[69755]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:06:35 compute-0 sudo[69753]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:35 compute-0 sudo[69906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bggdktspsmhzrfyfhlbggzuoksqaqbvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392795.4335678-49-75766401998581/AnsiballZ_command.py'
Nov 29 05:06:35 compute-0 sudo[69906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:36 compute-0 python3.9[69908]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:06:36 compute-0 sudo[69906]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:37 compute-0 sudo[70059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqqhgjzdjsvvfbyrgqyuegfblezbxkkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392796.4837556-57-120885270092420/AnsiballZ_stat.py'
Nov 29 05:06:37 compute-0 sudo[70059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:37 compute-0 python3.9[70061]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:06:37 compute-0 sudo[70059]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:37 compute-0 sudo[70213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipaodlcyptnznorolpbblyyjisxakzyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392797.4334857-65-1497354548565/AnsiballZ_command.py'
Nov 29 05:06:37 compute-0 sudo[70213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:37 compute-0 python3.9[70215]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:06:38 compute-0 sudo[70213]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:38 compute-0 sudo[70368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrbasvkquudaizmzmxvwehiffzdjxdun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392798.2589166-73-215508437649596/AnsiballZ_file.py'
Nov 29 05:06:38 compute-0 sudo[70368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:39 compute-0 python3.9[70370]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:06:39 compute-0 sudo[70368]: pam_unix(sudo:session): session closed for user root
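Session 15 completes a change-flag pattern started earlier: writing edpm-rules.nft also touched edpm-rules.nft.changed (05:06:08); this play stats that flag, applies the flush/rules/update-jumps bundle, and finally deletes the flag so the apply runs only when the rules actually changed. A compact sketch of the pattern:

```python
import os
import subprocess

FLAG = "/etc/nftables/edpm-rules.nft.changed"
BUNDLE = [
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
]

if os.path.exists(FLAG):                 # rules were rewritten earlier
    blob = "".join(open(p).read() for p in BUNDLE)
    subprocess.run(["nft", "-f", "-"], input=blob, text=True, check=True)
    os.unlink(FLAG)                      # consume the flag: apply only once
```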
Nov 29 05:06:39 compute-0 sshd-session[69295]: Connection closed by 192.168.122.30 port 39094
Nov 29 05:06:39 compute-0 sshd-session[69292]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:06:39 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 05:06:39 compute-0 systemd[1]: session-15.scope: Consumed 5.153s CPU time.
Nov 29 05:06:39 compute-0 systemd-logind[793]: Session 15 logged out. Waiting for processes to exit.
Nov 29 05:06:39 compute-0 systemd-logind[793]: Removed session 15.
Nov 29 05:06:44 compute-0 sshd-session[70395]: Accepted publickey for zuul from 192.168.122.30 port 38560 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:06:44 compute-0 systemd-logind[793]: New session 16 of user zuul.
Nov 29 05:06:44 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 29 05:06:44 compute-0 sshd-session[70395]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:06:45 compute-0 python3.9[70548]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:06:46 compute-0 sudo[70702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cctskzllwhmwjdfcbqfmhzfvywfjugcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392806.2027702-34-280727867002519/AnsiballZ_setup.py'
Nov 29 05:06:46 compute-0 sudo[70702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:46 compute-0 python3.9[70704]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:06:47 compute-0 sudo[70702]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:47 compute-0 sudo[70786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnupvfnjezmmtpmbslmenogcyciexpsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764392806.2027702-34-280727867002519/AnsiballZ_dnf.py'
Nov 29 05:06:47 compute-0 sudo[70786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:06:47 compute-0 python3.9[70788]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 05:06:49 compute-0 sudo[70786]: pam_unix(sudo:session): session closed for user root
Nov 29 05:06:49 compute-0 python3.9[70939]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
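`needs-restarting -r` (from yum-utils, installed in the preceding task) reports through its exit code: 0 means no reboot needed, 1 means core packages such as the kernel or glibc changed and a reboot is required. A sketch of consuming that, which is presumably what the playbook registers:

```python
import subprocess

res = subprocess.run(["needs-restarting", "-r"])
reboot_required = (res.returncode == 1)  # 0 = up to date, 1 = reboot needed
print("reboot required" if reboot_required else "no reboot needed")
```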
Nov 29 05:06:51 compute-0 python3.9[71090]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 05:06:52 compute-0 python3.9[71240]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:06:52 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:06:52 compute-0 python3.9[71391]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:06:53 compute-0 sshd-session[70398]: Connection closed by 192.168.122.30 port 38560
Nov 29 05:06:53 compute-0 sshd-session[70395]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:06:53 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 05:06:53 compute-0 systemd[1]: session-16.scope: Consumed 5.935s CPU time.
Nov 29 05:06:53 compute-0 systemd-logind[793]: Session 16 logged out. Waiting for processes to exit.
Nov 29 05:06:53 compute-0 systemd-logind[793]: Removed session 16.
Nov 29 05:07:01 compute-0 sshd-session[71416]: Accepted publickey for zuul from 38.102.83.113 port 57408 ssh2: RSA SHA256:claowykt67vOzr+EIqjbzPN7v3ZYSs573uWOdaK+kuE
Nov 29 05:07:01 compute-0 systemd-logind[793]: New session 17 of user zuul.
Nov 29 05:07:01 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 29 05:07:01 compute-0 sshd-session[71416]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:07:01 compute-0 sudo[71492]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qahgixfrvfxrecjtqspytiynnvsbeoma ; /usr/bin/python3'
Nov 29 05:07:01 compute-0 sudo[71492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:02 compute-0 useradd[71496]: new group: name=ceph-admin, GID=42478
Nov 29 05:07:02 compute-0 useradd[71496]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 29 05:07:02 compute-0 sudo[71492]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:02 compute-0 sudo[71578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycfhlyvdyyncvxpstrqtajoewvbfcnmx ; /usr/bin/python3'
Nov 29 05:07:02 compute-0 sudo[71578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:02 compute-0 sudo[71578]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:02 compute-0 sudo[71651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqkyxfwpftqcwsgojndbqbrafxgiivao ; /usr/bin/python3'
Nov 29 05:07:02 compute-0 sudo[71651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:03 compute-0 sudo[71651]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:03 compute-0 sudo[71701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jofhxmtebxqypoiknrndtdixiejbrugv ; /usr/bin/python3'
Nov 29 05:07:03 compute-0 sudo[71701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:03 compute-0 sudo[71701]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:03 compute-0 sudo[71727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjgfnsxulbjnzsspfwzvjlczevopzoey ; /usr/bin/python3'
Nov 29 05:07:03 compute-0 sudo[71727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:03 compute-0 sudo[71727]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:03 compute-0 sudo[71753]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srwxyaactgrkeeyjhtxvkbzqcacaifmi ; /usr/bin/python3'
Nov 29 05:07:03 compute-0 sudo[71753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:04 compute-0 sudo[71753]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:04 compute-0 sudo[71779]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrqcnhlnfsqjmmenixffxuawgfwyalgp ; /usr/bin/python3'
Nov 29 05:07:04 compute-0 sudo[71779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:04 compute-0 sudo[71779]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:04 compute-0 sudo[71857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsemuxooevnombjnwqogggwrrxssesmg ; /usr/bin/python3'
Nov 29 05:07:04 compute-0 sudo[71857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:04 compute-0 sudo[71857]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:05 compute-0 sudo[71930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gokrwufcbxputcvslxvfkbgajokhjzrv ; /usr/bin/python3'
Nov 29 05:07:05 compute-0 sudo[71930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:05 compute-0 sudo[71930]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:05 compute-0 sudo[72032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdlnpqsjcioluvhjtfgfddgnkfnjcwfd ; /usr/bin/python3'
Nov 29 05:07:05 compute-0 sudo[72032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:05 compute-0 sudo[72032]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:05 compute-0 sudo[72105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjocakwvgpjdvvrgyaihaiiksdmbeufv ; /usr/bin/python3'
Nov 29 05:07:05 compute-0 sudo[72105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:06 compute-0 sudo[72105]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:06 compute-0 sudo[72155]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmqfwixvfrajmysswyercuqgmzmwlkqr ; /usr/bin/python3'
Nov 29 05:07:06 compute-0 sudo[72155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:06 compute-0 python3[72157]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:07:07 compute-0 sudo[72155]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:08 compute-0 sudo[72250]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zctlkawmvrpitmpdehsdsxadcligsqum ; /usr/bin/python3'
Nov 29 05:07:08 compute-0 sudo[72250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:08 compute-0 python3[72252]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 05:07:09 compute-0 sudo[72250]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:09 compute-0 sudo[72277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eerhewzhimkqdmjgediwyysigwfelhqp ; /usr/bin/python3'
Nov 29 05:07:09 compute-0 sudo[72277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:09 compute-0 python3[72279]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 05:07:09 compute-0 sudo[72277]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:10 compute-0 sudo[72303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qawnipbwcpxdvfnctpfggvjlqjrqwvlm ; /usr/bin/python3'
Nov 29 05:07:10 compute-0 sudo[72303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:10 compute-0 python3[72305]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:07:10 compute-0 kernel: loop: module loaded
Nov 29 05:07:10 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 29 05:07:10 compute-0 sudo[72303]: pam_unix(sudo:session): session closed for user root
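In the shell task above, `dd bs=1 count=0 seek=20G` writes nothing but extends the file to 20 GiB, producing a sparse backing file; the kernel line confirms the size: 20 GiB / 512-byte sectors = 41943040, exactly the capacity change reported for loop3. A Python sketch of the same two steps:

```python
import subprocess

IMG = "/var/lib/ceph-osd-0.img"
SIZE = 20 * 1024**3                 # 20 GiB

with open(IMG, "wb") as fh:
    fh.truncate(SIZE)               # sparse extend, same effect as the dd

assert SIZE // 512 == 41943040      # sector count logged by the kernel above

subprocess.run(["losetup", "/dev/loop3", IMG], check=True)
```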
Nov 29 05:07:10 compute-0 sudo[72338]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykwlfpgmpmzqgrxaykujkkrqtgranjug ; /usr/bin/python3'
Nov 29 05:07:10 compute-0 sudo[72338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:10 compute-0 python3[72340]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:07:10 compute-0 lvm[72343]: PV /dev/loop3 not used.
Nov 29 05:07:10 compute-0 lvm[72352]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 05:07:10 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 05:07:10 compute-0 sudo[72338]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:10 compute-0 lvm[72354]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 05:07:10 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
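The LVM entry above layers one physical volume, one volume group, and one logical volume consuming all free extents onto the loop device. No explicit vgchange is needed: the lvm-activate-ceph_vg0.service autoactivation seen in the surrounding lines brings the LV online as soon as the VG is complete. As a sketch:

    # PV -> VG -> LV spanning 100% of the free extents, as in the log.
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
    lvs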
Nov 29 05:07:11 compute-0 sudo[72430]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjaxwdyohqekzqxnauiuuxntyoxbjshu ; /usr/bin/python3'
Nov 29 05:07:11 compute-0 sudo[72430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:11 compute-0 python3[72432]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:07:11 compute-0 sudo[72430]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:11 compute-0 sudo[72503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqifujeqxczzftmejsaqirisdkrcjcva ; /usr/bin/python3'
Nov 29 05:07:11 compute-0 sudo[72503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:11 compute-0 python3[72505]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764392830.8811626-36184-150558833563769/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:07:11 compute-0 sudo[72503]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:11 compute-0 sudo[72553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjgpcdqigmvsuunijyajhkxfzybevoxl ; /usr/bin/python3'
Nov 29 05:07:11 compute-0 sudo[72553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:12 compute-0 python3[72555]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:07:12 compute-0 systemd[1]: Reloading.
Nov 29 05:07:12 compute-0 systemd-sysv-generator[72589]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:07:12 compute-0 systemd-rc-local-generator[72586]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:07:12 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 29 05:07:12 compute-0 bash[72596]: /dev/loop3: [64513]:4194937 (/var/lib/ceph-osd-0.img)
Nov 29 05:07:12 compute-0 systemd[1]: Finished Ceph OSD losetup.
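The unit file itself is templated (ceph-osd-losetup.service.j2) and its contents never appear in the log, only its effect: on start it re-attaches the backing file and prints the mapping seen in the bash[72596] line above. A hypothetical reconstruction consistent with that behavior (the real template may differ):

    # Assumed unit body; only the service name and its output are logged.
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=true
    # Print the existing mapping, or attach the file if it is missing.
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service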
Nov 29 05:07:12 compute-0 lvm[72597]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 05:07:12 compute-0 lvm[72597]: VG ceph_vg0 finished
Nov 29 05:07:12 compute-0 sudo[72553]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:12 compute-0 sudo[72621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-intwibpsjcjacusfttqsedqyqedicpck ; /usr/bin/python3'
Nov 29 05:07:12 compute-0 sudo[72621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:12 compute-0 python3[72623]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 05:07:13 compute-0 chronyd[58635]: Selected source 137.220.55.211 (pool.ntp.org)
Nov 29 05:07:13 compute-0 sudo[72621]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:14 compute-0 sudo[72648]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aupuuyzhgpyposhztwfmhnqhjyqfadel ; /usr/bin/python3'
Nov 29 05:07:14 compute-0 sudo[72648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:14 compute-0 python3[72650]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 05:07:14 compute-0 sudo[72648]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:14 compute-0 sudo[72674]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjqlwxmeltllawfrhacmdkhdigbuusle ; /usr/bin/python3'
Nov 29 05:07:14 compute-0 sudo[72674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:14 compute-0 python3[72676]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:07:14 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 29 05:07:14 compute-0 sudo[72674]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:14 compute-0 sudo[72706]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgjypiwbqfpqkqfqxewvacvxfosplaac ; /usr/bin/python3'
Nov 29 05:07:14 compute-0 sudo[72706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:14 compute-0 python3[72708]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:07:14 compute-0 lvm[72711]: PV /dev/loop4 not used.
Nov 29 05:07:14 compute-0 lvm[72721]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 05:07:15 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 29 05:07:15 compute-0 sudo[72706]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:15 compute-0 lvm[72723]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 29 05:07:15 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 29 05:07:15 compute-0 sudo[72799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcsawzfhibicykebnmavgvoemslhonzp ; /usr/bin/python3'
Nov 29 05:07:15 compute-0 sudo[72799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:15 compute-0 python3[72801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:07:15 compute-0 sudo[72799]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:15 compute-0 sudo[72872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdqvvkcamogymyobjwwhourynyrfdmrx ; /usr/bin/python3'
Nov 29 05:07:15 compute-0 sudo[72872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:15 compute-0 python3[72874]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764392835.133401-36211-224069913332695/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:07:15 compute-0 sudo[72872]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:15 compute-0 sudo[72922]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphianecjdqteizjxbmmfdtwyvenvhfb ; /usr/bin/python3'
Nov 29 05:07:15 compute-0 sudo[72922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:16 compute-0 python3[72924]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:07:16 compute-0 systemd[1]: Reloading.
Nov 29 05:07:16 compute-0 systemd-rc-local-generator[72957]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:07:16 compute-0 systemd-sysv-generator[72960]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:07:16 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 29 05:07:16 compute-0 bash[72964]: /dev/loop4: [64513]:4327966 (/var/lib/ceph-osd-1.img)
Nov 29 05:07:16 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 29 05:07:16 compute-0 lvm[72965]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 05:07:16 compute-0 lvm[72965]: VG ceph_vg1 finished
Nov 29 05:07:16 compute-0 sudo[72922]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:16 compute-0 sudo[72989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqzzvwhrsfihtllugqelrsmydmvzinty ; /usr/bin/python3'
Nov 29 05:07:16 compute-0 sudo[72989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:16 compute-0 python3[72991]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 05:07:18 compute-0 sudo[72989]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:18 compute-0 sudo[73016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfqksjlmggvyfqikfqribocdraxxhvws ; /usr/bin/python3'
Nov 29 05:07:18 compute-0 sudo[73016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:18 compute-0 python3[73018]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 05:07:18 compute-0 sudo[73016]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:18 compute-0 sudo[73042]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olxvbrteirfwxuiocekcipshczlxhdzk ; /usr/bin/python3'
Nov 29 05:07:18 compute-0 sudo[73042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:18 compute-0 python3[73044]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:07:18 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 29 05:07:18 compute-0 sudo[73042]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:18 compute-0 sudo[73074]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiqobgejhjoqagubtcetbykjmaunjzls ; /usr/bin/python3'
Nov 29 05:07:18 compute-0 sudo[73074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:19 compute-0 python3[73076]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:07:19 compute-0 lvm[73079]: PV /dev/loop5 not used.
Nov 29 05:07:19 compute-0 lvm[73088]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 05:07:19 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 29 05:07:19 compute-0 sudo[73074]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:19 compute-0 lvm[73090]:   1 logical volume(s) in volume group "ceph_vg2" now active
Nov 29 05:07:19 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 29 05:07:19 compute-0 sudo[73166]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiyuuyvrhleigwbfccaqiobecvncedbn ; /usr/bin/python3'
Nov 29 05:07:19 compute-0 sudo[73166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:19 compute-0 python3[73168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:07:19 compute-0 sudo[73166]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:19 compute-0 sudo[73239]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzhinjnendpjlzhjowzcwsjaiksrevob ; /usr/bin/python3'
Nov 29 05:07:19 compute-0 sudo[73239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:20 compute-0 python3[73241]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764392839.509975-36238-40797589490747/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:07:20 compute-0 sudo[73239]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:20 compute-0 sudo[73289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxouehuzssvebaeuebeomvcpjtpvawgg ; /usr/bin/python3'
Nov 29 05:07:20 compute-0 sudo[73289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:20 compute-0 python3[73291]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:07:20 compute-0 systemd[1]: Reloading.
Nov 29 05:07:20 compute-0 systemd-rc-local-generator[73322]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:07:20 compute-0 systemd-sysv-generator[73326]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:07:20 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 29 05:07:20 compute-0 bash[73331]: /dev/loop5: [64513]:4328625 (/var/lib/ceph-osd-2.img)
Nov 29 05:07:20 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 29 05:07:20 compute-0 lvm[73332]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 05:07:20 compute-0 lvm[73332]: VG ceph_vg2 finished
Nov 29 05:07:20 compute-0 sudo[73289]: pam_unix(sudo:session): session closed for user root
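At this point the block-device pass has run three times with nothing varying but an index: /dev/loop3/ceph_vg0/ceph_lv0 through /dev/loop5/ceph_vg2/ceph_lv2, each backed by /var/lib/ceph-osd-N.img and pinned across reboots by a ceph-osd-losetup-N.service unit. The whole sequence collapses to a loop (a sketch; the offset of 3 between OSD index and loop device number is taken from the log):

    # OSD index i -> /dev/loop(i+3), ceph_vg<i>/ceph_lv<i>.
    for i in 0 1 2; do
        dev="/dev/loop$((i + 3))"
        img="/var/lib/ceph-osd-$i.img"
        dd if=/dev/zero of="$img" bs=1 count=0 seek=20G
        losetup "$dev" "$img"
        pvcreate "$dev"
        vgcreate "ceph_vg$i" "$dev"
        lvcreate -n "ceph_lv$i" -l +100%FREE "ceph_vg$i"
    done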
Nov 29 05:07:22 compute-0 python3[73356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:07:24 compute-0 sudo[73447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwgpuzylmbtvghrupxvyddhbgigksdhc ; /usr/bin/python3'
Nov 29 05:07:24 compute-0 sudo[73447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:25 compute-0 python3[73449]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 05:07:26 compute-0 groupadd[73455]: group added to /etc/group: name=cephadm, GID=992
Nov 29 05:07:26 compute-0 groupadd[73455]: group added to /etc/gshadow: name=cephadm
Nov 29 05:07:26 compute-0 groupadd[73455]: new group: name=cephadm, GID=992
Nov 29 05:07:26 compute-0 useradd[73462]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 29 05:07:26 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 05:07:26 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 05:07:26 compute-0 sudo[73447]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 05:07:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 05:07:27 compute-0 systemd[1]: run-rbb210d0058144053a79b82b8cc8ed591.service: Deactivated successfully.
Nov 29 05:07:27 compute-0 sudo[73558]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usfzgxuhrhbwvdmqxkhoucjwxyrhssam ; /usr/bin/python3'
Nov 29 05:07:27 compute-0 sudo[73558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:27 compute-0 python3[73560]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 05:07:27 compute-0 sudo[73558]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:27 compute-0 sudo[73586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chvavovxcyczdnlrfzvapdjjiviiaaft ; /usr/bin/python3'
Nov 29 05:07:27 compute-0 sudo[73586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:27 compute-0 python3[73588]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:28 compute-0 sudo[73586]: pam_unix(sudo:session): session closed for user root
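cephadm ls --no-detail prints a JSON array of the daemons already managed on this host; on a freshly prepared node it is empty, and a reasonable reading is that the play uses this as an idempotence gate before bootstrapping (the gate itself is an assumption; only the ls call is logged). A sketch of such a check using the jq installed earlier:

    # Assumed gate: bootstrap only if cephadm manages no daemons yet.
    if [ "$(/usr/sbin/cephadm ls --no-detail | jq 'length')" -eq 0 ]; then
        echo "no cephadm daemons found; safe to bootstrap"
    fi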
Nov 29 05:07:28 compute-0 sudo[73650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esnxycgdtgbnyrdhekhstqsdkkslpsgg ; /usr/bin/python3'
Nov 29 05:07:28 compute-0 sudo[73650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:28 compute-0 python3[73652]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:07:28 compute-0 sudo[73650]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:28 compute-0 sudo[73676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gakiyzlkabyrgsibqijlagvopajivlua ; /usr/bin/python3'
Nov 29 05:07:28 compute-0 sudo[73676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:28 compute-0 python3[73678]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:07:28 compute-0 sudo[73676]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:29 compute-0 sudo[73754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbqbfrydtuceofqxsbckefwnrfltlsfg ; /usr/bin/python3'
Nov 29 05:07:29 compute-0 sudo[73754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:29 compute-0 python3[73756]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:07:29 compute-0 sudo[73754]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:29 compute-0 sudo[73827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgpwaistzgzbnvhkapbtzdbdhfaqzhfb ; /usr/bin/python3'
Nov 29 05:07:29 compute-0 sudo[73827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:29 compute-0 python3[73829]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764392849.3172789-36385-221335497443026/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:07:29 compute-0 sudo[73827]: pam_unix(sudo:session): session closed for user root
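Only the path and checksum of ceph_spec.yaml are logged, not its contents. Given the three LVs prepared above and the single compute-0 host, a plausible (entirely hypothetical) service spec would point the OSD drive group at those logical volumes:

    # Hypothetical ceph_spec.yaml; the real file is not shown in the log.
    cat > /home/ceph-admin/specs/ceph_spec.yaml <<'EOF'
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        paths:
          - /dev/ceph_vg0/ceph_lv0
          - /dev/ceph_vg1/ceph_lv1
          - /dev/ceph_vg2/ceph_lv2
    EOF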
Nov 29 05:07:30 compute-0 sudo[73929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-necojaftqlwwikmhjwpdpjirlajdqdlp ; /usr/bin/python3'
Nov 29 05:07:30 compute-0 sudo[73929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:30 compute-0 python3[73931]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:07:30 compute-0 sudo[73929]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:30 compute-0 sudo[74002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugtlezvajrzzhqnxtycjhabbducmjxhk ; /usr/bin/python3'
Nov 29 05:07:30 compute-0 sudo[74002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:31 compute-0 python3[74004]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764392850.3555725-36403-100945960930423/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:07:31 compute-0 sudo[74002]: pam_unix(sudo:session): session closed for user root
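assimilate_ceph.conf (copied from a source named initial_ceph.conf) is likewise logged only as a checksum; it is later handed to cephadm bootstrap via --config, which folds its options into the initial monitor configuration. A hypothetical minimal example of such a file for a single-host lab cluster:

    # Hypothetical contents; the real assimilate_ceph.conf is not logged.
    cat > /home/ceph-admin/assimilate_ceph.conf <<'EOF'
    [global]
    osd_pool_default_size = 1
    mon_warn_on_pool_no_redundancy = false
    EOF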
Nov 29 05:07:31 compute-0 sudo[74052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djurzkncrvirdfwpkhhqvmhhmfxwoerg ; /usr/bin/python3'
Nov 29 05:07:31 compute-0 sudo[74052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:31 compute-0 python3[74054]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 05:07:31 compute-0 sudo[74052]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:31 compute-0 sudo[74080]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgzwliyylupgvcepbpgbawffwpwkkdkh ; /usr/bin/python3'
Nov 29 05:07:31 compute-0 sudo[74080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:31 compute-0 python3[74082]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 05:07:31 compute-0 sudo[74080]: pam_unix(sudo:session): session closed for user root
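The two stat calls confirm that an SSH key pair already exists for ceph-admin; bootstrap is pointed at exactly these paths below, and cephadm then uses the key to SSH back into the host as its orchestration user (the 05:07:32 sshd-session lines). Had the pair needed creating, the usual sketch is:

    # Passphrase-less key pair for cephadm's SSH orchestration user.
    sudo -u ceph-admin ssh-keygen -t rsa -N '' -f /home/ceph-admin/.ssh/id_rsa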
Nov 29 05:07:32 compute-0 sudo[74108]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djccnbmpalpdrlwthjpmhkefomkspdjw ; /usr/bin/python3'
Nov 29 05:07:32 compute-0 sudo[74108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:32 compute-0 python3[74110]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 05:07:32 compute-0 sudo[74108]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:32 compute-0 sudo[74136]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maakzkkwxklsinbqswtchhphrxnfxdms ; /usr/bin/python3'
Nov 29 05:07:32 compute-0 sudo[74136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:07:32 compute-0 python3[74138]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config /home/ceph-admin/assimilate_ceph.conf --single-host-defaults --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
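Re-wrapped for readability, the bootstrap invocation above is (all flags and values verbatim from the entry):

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --skip-prepare-host \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 93f82912-647c-5e78-b081-707d0a2966d8 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --single-host-defaults \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100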
Nov 29 05:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:32 compute-0 sshd-session[74154]: Accepted publickey for ceph-admin from 192.168.122.100 port 39038 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:07:32 compute-0 systemd-logind[793]: New session 18 of user ceph-admin.
Nov 29 05:07:32 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 05:07:32 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 05:07:32 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 05:07:32 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 29 05:07:32 compute-0 systemd[74158]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:07:33 compute-0 systemd[74158]: Queued start job for default target Main User Target.
Nov 29 05:07:33 compute-0 systemd[74158]: Created slice User Application Slice.
Nov 29 05:07:33 compute-0 systemd[74158]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 05:07:33 compute-0 systemd[74158]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 05:07:33 compute-0 systemd[74158]: Reached target Paths.
Nov 29 05:07:33 compute-0 systemd[74158]: Reached target Timers.
Nov 29 05:07:33 compute-0 systemd[74158]: Starting D-Bus User Message Bus Socket...
Nov 29 05:07:33 compute-0 systemd[74158]: Starting Create User's Volatile Files and Directories...
Nov 29 05:07:33 compute-0 systemd[74158]: Listening on D-Bus User Message Bus Socket.
Nov 29 05:07:33 compute-0 systemd[74158]: Reached target Sockets.
Nov 29 05:07:33 compute-0 systemd[74158]: Finished Create User's Volatile Files and Directories.
Nov 29 05:07:33 compute-0 systemd[74158]: Reached target Basic System.
Nov 29 05:07:33 compute-0 systemd[74158]: Reached target Main User Target.
Nov 29 05:07:33 compute-0 systemd[74158]: Startup finished in 156ms.
Nov 29 05:07:33 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 29 05:07:33 compute-0 systemd[1]: Started Session 18 of User ceph-admin.
Nov 29 05:07:33 compute-0 sshd-session[74154]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:07:33 compute-0 sudo[74175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 29 05:07:33 compute-0 sudo[74175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:07:33 compute-0 sudo[74175]: pam_unix(sudo:session): session closed for user root
Nov 29 05:07:33 compute-0 sshd-session[74174]: Received disconnect from 192.168.122.100 port 39038:11: disconnected by user
Nov 29 05:07:33 compute-0 sshd-session[74174]: Disconnected from user ceph-admin 192.168.122.100 port 39038
Nov 29 05:07:33 compute-0 sshd-session[74154]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 29 05:07:33 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 05:07:33 compute-0 systemd-logind[793]: Session 18 logged out. Waiting for processes to exit.
Nov 29 05:07:33 compute-0 systemd-logind[793]: Removed session 18.
Nov 29 05:07:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2735247542-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 05:07:43 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 29 05:07:43 compute-0 systemd[74158]: Activating special unit Exit the Session...
Nov 29 05:07:43 compute-0 systemd[74158]: Stopped target Main User Target.
Nov 29 05:07:43 compute-0 systemd[74158]: Stopped target Basic System.
Nov 29 05:07:43 compute-0 systemd[74158]: Stopped target Paths.
Nov 29 05:07:43 compute-0 systemd[74158]: Stopped target Sockets.
Nov 29 05:07:43 compute-0 systemd[74158]: Stopped target Timers.
Nov 29 05:07:43 compute-0 systemd[74158]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 05:07:43 compute-0 systemd[74158]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 05:07:43 compute-0 systemd[74158]: Closed D-Bus User Message Bus Socket.
Nov 29 05:07:43 compute-0 systemd[74158]: Stopped Create User's Volatile Files and Directories.
Nov 29 05:07:43 compute-0 systemd[74158]: Removed slice User Application Slice.
Nov 29 05:07:43 compute-0 systemd[74158]: Reached target Shutdown.
Nov 29 05:07:43 compute-0 systemd[74158]: Finished Exit the Session.
Nov 29 05:07:43 compute-0 systemd[74158]: Reached target Exit the Session.
Nov 29 05:07:43 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 29 05:07:43 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 29 05:07:43 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 29 05:07:43 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 29 05:07:43 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 29 05:07:43 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 29 05:07:43 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 29 05:07:47 compute-0 podman[74212]: 2025-11-29 05:07:47.190577486 +0000 UTC m=+13.879009126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:47 compute-0 podman[74300]: 2025-11-29 05:07:47.279468559 +0000 UTC m=+0.051918607 container create f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:07:47 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 05:07:47 compute-0 systemd[1]: Started libpod-conmon-f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7.scope.
Nov 29 05:07:47 compute-0 podman[74300]: 2025-11-29 05:07:47.257194114 +0000 UTC m=+0.029644162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:47 compute-0 podman[74300]: 2025-11-29 05:07:47.410645164 +0000 UTC m=+0.183095242 container init f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:07:47 compute-0 podman[74300]: 2025-11-29 05:07:47.420969813 +0000 UTC m=+0.193419861 container start f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:07:47 compute-0 podman[74300]: 2025-11-29 05:07:47.424703302 +0000 UTC m=+0.197153360 container attach f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:07:47 compute-0 sweet_dhawan[74316]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 05:07:47 compute-0 systemd[1]: libpod-f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7.scope: Deactivated successfully.
Nov 29 05:07:47 compute-0 podman[74300]: 2025-11-29 05:07:47.723199802 +0000 UTC m=+0.495649860 container died f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 05:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d542b808386550c61e5bd03f6c29417f304a01ee2309111c51861fbd24a20eb8-merged.mount: Deactivated successfully.
Nov 29 05:07:47 compute-0 podman[74300]: 2025-11-29 05:07:47.773781546 +0000 UTC m=+0.546231624 container remove f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7 (image=quay.io/ceph/ceph:v18, name=sweet_dhawan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:07:47 compute-0 systemd[1]: libpod-conmon-f2344eb8290798031c4b3238177681e8f487e1e1b5de140610cb053dd01986f7.scope: Deactivated successfully.
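The short-lived sweet_dhawan container is cephadm checking the pulled image's Ceph version: it runs ceph --version inside quay.io/ceph/ceph:v18, and the banner lands in the log as the container's only output. The equivalent manual check (container names like sweet_dhawan are auto-generated by podman):

    # Prints the banner seen above: ceph version 18.2.7 (...) reef (stable).
    podman run --rm --entrypoint ceph quay.io/ceph/ceph:v18 --version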
Nov 29 05:07:47 compute-0 podman[74331]: 2025-11-29 05:07:47.858306753 +0000 UTC m=+0.057784977 container create 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:07:47 compute-0 systemd[1]: Started libpod-conmon-45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0.scope.
Nov 29 05:07:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:47 compute-0 podman[74331]: 2025-11-29 05:07:47.830764192 +0000 UTC m=+0.030242526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:47 compute-0 podman[74331]: 2025-11-29 05:07:47.928549608 +0000 UTC m=+0.128027832 container init 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:07:47 compute-0 podman[74331]: 2025-11-29 05:07:47.934540162 +0000 UTC m=+0.134018386 container start 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:07:47 compute-0 podman[74331]: 2025-11-29 05:07:47.937607106 +0000 UTC m=+0.137085330 container attach 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:07:47 compute-0 affectionate_gagarin[74347]: 167 167
Nov 29 05:07:47 compute-0 systemd[1]: libpod-45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0.scope: Deactivated successfully.
Nov 29 05:07:47 compute-0 podman[74331]: 2025-11-29 05:07:47.939235234 +0000 UTC m=+0.138713458 container died 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:07:47 compute-0 podman[74331]: 2025-11-29 05:07:47.976528099 +0000 UTC m=+0.176006323 container remove 45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0 (image=quay.io/ceph/ceph:v18, name=affectionate_gagarin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:07:47 compute-0 systemd[1]: libpod-conmon-45ab40595101bb3f8d4e5e90fa6fdeb323759d4a81929641c6fff68a26e967c0.scope: Deactivated successfully.
Nov 29 05:07:48 compute-0 podman[74364]: 2025-11-29 05:07:48.040188256 +0000 UTC m=+0.041724962 container create 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:07:48 compute-0 systemd[1]: Started libpod-conmon-4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac.scope.
Nov 29 05:07:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:48 compute-0 podman[74364]: 2025-11-29 05:07:48.022429569 +0000 UTC m=+0.023966285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:48 compute-0 podman[74364]: 2025-11-29 05:07:48.122019928 +0000 UTC m=+0.123556654 container init 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:48 compute-0 podman[74364]: 2025-11-29 05:07:48.133018422 +0000 UTC m=+0.134555168 container start 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 05:07:48 compute-0 podman[74364]: 2025-11-29 05:07:48.136880285 +0000 UTC m=+0.138417041 container attach 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:48 compute-0 youthful_boyd[74381]: AQCkfyppi5ddChAAxPeZ4vZwWoNgstL3bFLnKA==
Nov 29 05:07:48 compute-0 systemd[1]: libpod-4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac.scope: Deactivated successfully.
Nov 29 05:07:48 compute-0 podman[74364]: 2025-11-29 05:07:48.18002309 +0000 UTC m=+0.181559836 container died 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ec3b6396c7dafa3c78018ae6fcf18b20083134316d0e3c697eb9b0fa2cefaba-merged.mount: Deactivated successfully.
Nov 29 05:07:48 compute-0 podman[74364]: 2025-11-29 05:07:48.227434818 +0000 UTC m=+0.228971564 container remove 4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac (image=quay.io/ceph/ceph:v18, name=youthful_boyd, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:48 compute-0 systemd[1]: libpod-conmon-4a082b3962fb546e8ccb737ef8babcf32a6d3f97c670ca9b356e918a96de48ac.scope: Deactivated successfully.
Nov 29 05:07:48 compute-0 podman[74400]: 2025-11-29 05:07:48.317360044 +0000 UTC m=+0.059489428 container create 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:07:48 compute-0 systemd[1]: Started libpod-conmon-5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5.scope.
Nov 29 05:07:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:48 compute-0 podman[74400]: 2025-11-29 05:07:48.388925191 +0000 UTC m=+0.131054645 container init 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:07:48 compute-0 podman[74400]: 2025-11-29 05:07:48.298117123 +0000 UTC m=+0.040246497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:48 compute-0 podman[74400]: 2025-11-29 05:07:48.398765727 +0000 UTC m=+0.140895131 container start 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:48 compute-0 podman[74400]: 2025-11-29 05:07:48.40261747 +0000 UTC m=+0.144746874 container attach 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 05:07:48 compute-0 interesting_ritchie[74416]: AQCkfyppOwUAGhAAGR45Br/xGAd+PzV1CBG2Rw==
Nov 29 05:07:48 compute-0 systemd[1]: libpod-5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5.scope: Deactivated successfully.
Nov 29 05:07:48 compute-0 podman[74400]: 2025-11-29 05:07:48.441475902 +0000 UTC m=+0.183605286 container died 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:07:48 compute-0 podman[74400]: 2025-11-29 05:07:48.478695264 +0000 UTC m=+0.220824638 container remove 5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5 (image=quay.io/ceph/ceph:v18, name=interesting_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:07:48 compute-0 systemd[1]: libpod-conmon-5652adf9db3ebc7a920f80aef929148799d38f1a7db9233eba1270dc687bede5.scope: Deactivated successfully.
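
This first short-lived container (interesting_ritchie), like the near-identical intelligent_ishizaka run that follows, is a cephadm bootstrap step: each runs a single command inside quay.io/ceph/ceph:v18 and exits, so podman logs a complete create/init/start/attach/died/remove cycle within about 150 ms. The base64 line each one prints is a freshly generated cephx secret. Note that podman's own m=+... monotonic offsets give the true event order; journald records arrival order, which is why the "image pull" event (m=+0.040) appears after "container init" (m=+0.131). A plausible reconstruction of the step, with the podman wrapper simplified and the exact cephadm arguments assumed:

    # generate a new cephx secret and print it to stdout
    # (the base64 value logged by interesting_ritchie above)
    podman run --rm quay.io/ceph/ceph:v18 ceph-authtool --gen-print-key
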
Nov 29 05:07:48 compute-0 podman[74435]: 2025-11-29 05:07:48.529949874 +0000 UTC m=+0.034983930 container create 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 05:07:48 compute-0 systemd[1]: Started libpod-conmon-5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d.scope.
Nov 29 05:07:48 compute-0 podman[74435]: 2025-11-29 05:07:48.515002745 +0000 UTC m=+0.020036801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:48 compute-0 podman[74435]: 2025-11-29 05:07:48.836087537 +0000 UTC m=+0.341121623 container init 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:07:48 compute-0 podman[74435]: 2025-11-29 05:07:48.847499981 +0000 UTC m=+0.352534047 container start 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:48 compute-0 podman[74435]: 2025-11-29 05:07:48.85162975 +0000 UTC m=+0.356663796 container attach 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:48 compute-0 intelligent_ishizaka[74453]: AQCkfyppcoWXMxAASIrnLmFhI08U7xTuCVjxYw==
Nov 29 05:07:48 compute-0 systemd[1]: libpod-5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d.scope: Deactivated successfully.
Nov 29 05:07:48 compute-0 podman[74460]: 2025-11-29 05:07:48.907853529 +0000 UTC m=+0.027410129 container died 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 05:07:48 compute-0 podman[74460]: 2025-11-29 05:07:48.955472841 +0000 UTC m=+0.075029361 container remove 5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d (image=quay.io/ceph/ceph:v18, name=intelligent_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 05:07:48 compute-0 systemd[1]: libpod-conmon-5a0544c5670babee2bcd3c57855d5763297e1d239ea57597e28d10a64762da8d.scope: Deactivated successfully.
Nov 29 05:07:49 compute-0 podman[74475]: 2025-11-29 05:07:49.05301687 +0000 UTC m=+0.055475031 container create 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:07:49 compute-0 systemd[1]: Started libpod-conmon-5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d.scope.
Nov 29 05:07:49 compute-0 podman[74475]: 2025-11-29 05:07:49.024456826 +0000 UTC m=+0.026915077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e5c0cc34a574f8972b6c8f663a5b0a4dee18778790bf1392968a89a48e98efa/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:49 compute-0 podman[74475]: 2025-11-29 05:07:49.136552674 +0000 UTC m=+0.139010855 container init 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:07:49 compute-0 podman[74475]: 2025-11-29 05:07:49.143609873 +0000 UTC m=+0.146068025 container start 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:07:49 compute-0 podman[74475]: 2025-11-29 05:07:49.146849811 +0000 UTC m=+0.149307962 container attach 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:49 compute-0 hopeful_montalcini[74490]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 29 05:07:49 compute-0 hopeful_montalcini[74490]: setting min_mon_release = pacific
Nov 29 05:07:49 compute-0 hopeful_montalcini[74490]: /usr/bin/monmaptool: set fsid to 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:07:49 compute-0 hopeful_montalcini[74490]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 29 05:07:49 compute-0 systemd[1]: libpod-5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d.scope: Deactivated successfully.
Nov 29 05:07:49 compute-0 podman[74475]: 2025-11-29 05:07:49.185362175 +0000 UTC m=+0.187820346 container died 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e5c0cc34a574f8972b6c8f663a5b0a4dee18778790bf1392968a89a48e98efa-merged.mount: Deactivated successfully.
Nov 29 05:07:49 compute-0 podman[74475]: 2025-11-29 05:07:49.226872321 +0000 UTC m=+0.229330482 container remove 5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d (image=quay.io/ceph/ceph:v18, name=hopeful_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:49 compute-0 systemd[1]: libpod-conmon-5facd17d12eb685c42353b36bb368ce9544a51f27a21b662020c2ee811fe078d.scope: Deactivated successfully.
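
The hopeful_montalcini container runs /usr/bin/monmaptool to build the initial monitor map: it pins min_mon_release to pacific, sets the cluster fsid to 93f82912-647c-5e78-b081-707d0a2966d8, and writes epoch 0 with one monitor to /tmp/monmap, a file bind-mounted from the host (the kernel xfs remount notice above marks that bind mount). A sketch of an equivalent invocation; the flags are assumed rather than taken from the log, and the monitor address comes from the mon startup line further down:

    # write an epoch-0 monmap containing a single monitor and a fixed fsid
    monmaptool --create --fsid 93f82912-647c-5e78-b081-707d0a2966d8 \
        --addv compute-0 '[v2:192.168.122.100:3300,v1:192.168.122.100:6789]' \
        /tmp/monmap
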
Nov 29 05:07:49 compute-0 podman[74509]: 2025-11-29 05:07:49.302159956 +0000 UTC m=+0.044976879 container create d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:07:49 compute-0 systemd[1]: Started libpod-conmon-d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464.scope.
Nov 29 05:07:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae74f6ab1726aa896dedb92d1fdf67139fd12bfd19395bf17c76a8d0e3d3b073/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae74f6ab1726aa896dedb92d1fdf67139fd12bfd19395bf17c76a8d0e3d3b073/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae74f6ab1726aa896dedb92d1fdf67139fd12bfd19395bf17c76a8d0e3d3b073/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae74f6ab1726aa896dedb92d1fdf67139fd12bfd19395bf17c76a8d0e3d3b073/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:49 compute-0 podman[74509]: 2025-11-29 05:07:49.376186492 +0000 UTC m=+0.119003435 container init d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 05:07:49 compute-0 podman[74509]: 2025-11-29 05:07:49.281461701 +0000 UTC m=+0.024278634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:49 compute-0 podman[74509]: 2025-11-29 05:07:49.384962012 +0000 UTC m=+0.127778925 container start d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:07:49 compute-0 podman[74509]: 2025-11-29 05:07:49.388852966 +0000 UTC m=+0.131669909 container attach d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 05:07:49 compute-0 systemd[1]: libpod-d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464.scope: Deactivated successfully.
Nov 29 05:07:49 compute-0 podman[74509]: 2025-11-29 05:07:49.490572056 +0000 UTC m=+0.233389009 container died d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:07:49 compute-0 podman[74509]: 2025-11-29 05:07:49.536021006 +0000 UTC m=+0.278837949 container remove d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464 (image=quay.io/ceph/ceph:v18, name=nostalgic_mclean, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:07:49 compute-0 systemd[1]: libpod-conmon-d33453ac2fa9deb144ef79b6fd9c0f631c7b3c64971732b57ff92046f845c464.scope: Deactivated successfully.
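
nostalgic_mclean is the mon-store mkfs step: it bind-mounts the generated keyring, the monmap, the log directory, and the empty data directory /var/lib/ceph/mon/ceph-compute-0 (the four kernel xfs notices mark those mounts; "supports timestamps until 2038" is informational, meaning this XFS filesystem lacks the bigtime feature, not an error). Inside the container the monitor store is initialized, roughly as in the classic manual procedure; whatever extra flags cephadm passes are omitted here:

    # populate the mon data directory from the monmap and keyring
    ceph-mon --mkfs -i compute-0 --monmap /tmp/monmap --keyring /tmp/keyring
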
Nov 29 05:07:49 compute-0 systemd[1]: Reloading.
Nov 29 05:07:49 compute-0 systemd-sysv-generator[74592]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:07:49 compute-0 systemd-rc-local-generator[74588]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:07:49 compute-0 systemd[1]: Reloading.
Nov 29 05:07:49 compute-0 systemd-rc-local-generator[74631]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:07:49 compute-0 systemd-sysv-generator[74634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:07:50 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 05:07:50 compute-0 systemd[1]: Reloading.
Nov 29 05:07:50 compute-0 systemd-rc-local-generator[74665]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:07:50 compute-0 systemd-sysv-generator[74671]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:07:50 compute-0 systemd[1]: Reached target Ceph cluster 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:07:50 compute-0 systemd[1]: Reloading.
Nov 29 05:07:50 compute-0 systemd-rc-local-generator[74703]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:07:50 compute-0 systemd-sysv-generator[74707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:07:50 compute-0 systemd[1]: Reloading.
Nov 29 05:07:50 compute-0 systemd-rc-local-generator[74747]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:07:50 compute-0 systemd-sysv-generator[74750]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:07:50 compute-0 systemd[1]: Created slice Slice /system/ceph-93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:07:50 compute-0 systemd[1]: Reached target System Time Set.
Nov 29 05:07:50 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 29 05:07:50 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8...
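
Each "Reloading." entry is a systemctl daemon-reload issued as cephadm installs its unit files, and every reload re-runs the generators, which is why the systemd-sysv-generator warning about the legacy /etc/rc.d/init.d/network script and the rc.local notice repeat verbatim. The resulting layout is cephadm's standard one: a per-cluster slice and target named after the fsid, grouped under the umbrella ceph.target, with the daemon itself in a templated unit of the form ceph-<fsid>@<daemon>.service. To inspect it on the host:

    systemctl status 'ceph-93f82912-647c-5e78-b081-707d0a2966d8@mon.compute-0.service'
    systemctl list-dependencies ceph.target
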
Nov 29 05:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:51 compute-0 podman[74804]: 2025-11-29 05:07:51.170846731 +0000 UTC m=+0.040949843 container create 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccfd9345522cff3c8f93e856cbc098d29bc9341e8da681f122479caebe3b5d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccfd9345522cff3c8f93e856cbc098d29bc9341e8da681f122479caebe3b5d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccfd9345522cff3c8f93e856cbc098d29bc9341e8da681f122479caebe3b5d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ccfd9345522cff3c8f93e856cbc098d29bc9341e8da681f122479caebe3b5d5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:51 compute-0 podman[74804]: 2025-11-29 05:07:51.233599936 +0000 UTC m=+0.103703068 container init 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:07:51 compute-0 podman[74804]: 2025-11-29 05:07:51.246287491 +0000 UTC m=+0.116390603 container start 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:07:51 compute-0 bash[74804]: 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16
Nov 29 05:07:51 compute-0 podman[74804]: 2025-11-29 05:07:51.155258127 +0000 UTC m=+0.025361249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:51 compute-0 systemd[1]: Started Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8.
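
Unlike the throwaway containers earlier, this one runs detached under a deterministic name (ceph-<fsid>-mon-compute-0) and is kept alive by the systemd unit; the bash[74804] line is the unit's start script echoing the new container ID, as podman run -d does. From here on the monitor's stdout/stderr arrives via conmon as the ceph-mon[74823] entries below. To confirm or follow it:

    podman ps --filter name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0
    journalctl -fu 'ceph-93f82912-647c-5e78-b081-707d0a2966d8@mon.compute-0.service'
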
Nov 29 05:07:51 compute-0 ceph-mon[74823]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 05:07:51 compute-0 ceph-mon[74823]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 05:07:51 compute-0 ceph-mon[74823]: pidfile_write: ignore empty --pid-file
Nov 29 05:07:51 compute-0 ceph-mon[74823]: load: jerasure load: lrc 
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: RocksDB version: 7.9.2
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Git sha 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: DB SUMMARY
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: DB Session ID:  6W04Q5N79TYXB507NAYJ
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: CURRENT file:  CURRENT
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                         Options.error_if_exists: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                       Options.create_if_missing: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                                     Options.env: 0x56082db8dc40
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                                Options.info_log: 0x56082ff78e80
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                              Options.statistics: (nil)
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                               Options.use_fsync: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                              Options.db_log_dir: 
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                                 Options.wal_dir: 
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                    Options.write_buffer_manager: 0x56082ff88b40
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                  Options.unordered_write: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                               Options.row_cache: None
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                              Options.wal_filter: None
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.two_write_queues: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.wal_compression: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.atomic_flush: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.max_background_jobs: 2
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.max_background_compactions: -1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.max_subcompactions: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                          Options.max_open_files: -1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Compression algorithms supported:
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         kZSTD supported: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         kXpressCompression supported: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         kBZip2Compression supported: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         kLZ4Compression supported: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         kZlibCompression supported: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         kSnappyCompression supported: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:           Options.merge_operator: 
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:        Options.compaction_filter: None
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56082ff78a80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56082ff711f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:          Options.compression: NoCompression
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.num_levels: 7
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e7a482e8-4a7b-461a-a1cb-36d637653226
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392871307446, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392871309430, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "6W04Q5N79TYXB507NAYJ", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392871309564, "job": 1, "event": "recovery_finished"}
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56082ff9ae00
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: DB pointer 0x560830024000
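
Everything tagged rocksdb: above is the monitor's embedded RocksDB 7.9.2 opening its store at /var/lib/ceph/mon/ceph-compute-0/store.db: it recovers the 59-byte MANIFEST-000005 and the 807-byte WAL written during mkfs, flushes those five keys into a first 1.9 KB SST (file number 8), and rolls a new manifest. The option values in the dump (compression off, a 512 MB BinnedLRUCache block cache, level-style compaction with dynamic level bytes) are Ceph's monitor defaults rather than host-specific tuning; they come from the mon_rocksdb_options setting, which can be read back through the daemon's admin socket, e.g. from a cephadm shell (socket availability inside the shell container assumed):

    cephadm shell -- ceph daemon mon.compute-0 config get mon_rocksdb_options
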
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:07:51 compute-0 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56082ff711f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
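Note: the block above is RocksDB's one-time startup stats dump for the mon's key-value store (store.db). For offline inspection of such a store, Ceph ships ceph-monstore-tool; a minimal sketch, assuming the mon is stopped and the command runs where the logged path is visible (e.g. inside "cephadm shell --name mon.compute-0"):

    # list the keys in the mon store shown above (mon must not be running)
    ceph-monstore-tool /var/lib/ceph/mon/ceph-compute-0 dump-keys
    # extract the monmap from the store and print it
    ceph-monstore-tool /var/lib/ceph/mon/ceph-compute-0 get monmap -- --out /tmp/monmap
    monmaptool --print /tmp/monmap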
Nov 29 05:07:51 compute-0 ceph-mon[74823]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@-1(???) e0 preinit fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 29 05:07:51 compute-0 ceph-mon[74823]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 05:07:51 compute-0 ceph-mon[74823]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
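Note: the create_pending lines above show the mon seeding the default OSD fullness thresholds into the first osdmap (nearfull 0.85, backfillfull 0.9, full 0.95). These are adjustable at runtime with the standard commands; a sketch, using the same values the log shows:

    ceph osd set-nearfull-ratio 0.85
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95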
Nov 29 05:07:51 compute-0 podman[74824]: 2025-11-29 05:07:51.33633784 +0000 UTC m=+0.048860793 container create 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 05:07:51 compute-0 ceph-mon[74823]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 05:07:51 compute-0 ceph-mon[74823]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 05:07:51 compute-0 ceph-mon[74823]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T05:07:49.437560Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).mds e1 new map
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
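Note: the MDS map printed above is empty ("No filesystems configured") because nothing has created a CephFS yet. For reference, a minimal sketch of what would populate it later; the pool and filesystem names are hypothetical, not from this log:

    # hypothetical names; create the two pools a filesystem needs, then the fs
    ceph osd pool create cephfs_metadata
    ceph osd pool create cephfs_data
    ceph fs new cephfs cephfs_metadata cephfs_data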
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 05:07:51 compute-0 ceph-mon[74823]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mkfs 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 29 05:07:51 compute-0 ceph-mon[74823]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 05:07:51 compute-0 ceph-mon[74823]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 05:07:51 compute-0 systemd[1]: Started libpod-conmon-7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef.scope.
Nov 29 05:07:51 compute-0 podman[74824]: 2025-11-29 05:07:51.312504729 +0000 UTC m=+0.025027692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d2e8077e70b0d70eed71a99df0c6f18dc45efe530acd91178f9b29bb20030b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d2e8077e70b0d70eed71a99df0c6f18dc45efe530acd91178f9b29bb20030b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d2e8077e70b0d70eed71a99df0c6f18dc45efe530acd91178f9b29bb20030b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:51 compute-0 podman[74824]: 2025-11-29 05:07:51.458121982 +0000 UTC m=+0.170644935 container init 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 05:07:51 compute-0 podman[74824]: 2025-11-29 05:07:51.471434521 +0000 UTC m=+0.183957464 container start 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:07:51 compute-0 podman[74824]: 2025-11-29 05:07:51.474914564 +0000 UTC m=+0.187437517 container attach 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:07:51 compute-0 ceph-mon[74823]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 05:07:51 compute-0 ceph-mon[74823]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/900733589' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:   cluster:
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:     id:     93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:     health: HEALTH_OK
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:  
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:   services:
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:     mon: 1 daemons, quorum compute-0 (age 0.561197s)
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:     mgr: no daemons active
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:     osd: 0 osds: 0 up, 0 in
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:  
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:   data:
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:     pools:   0 pools, 0 pgs
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:     objects: 0 objects, 0 B
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:     usage:   0 B used, 0 B / 0 B avail
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:     pgs:     
Nov 29 05:07:51 compute-0 goofy_ganguly[74879]:  
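Note: the goofy_ganguly block above is the output of "ceph status" (the mon_command {"prefix": "status"} dispatched a few lines earlier), run by the bootstrap in a short-lived ceph:v18 container. The same view can be reproduced on a cephadm host with the standard helper:

    # starts a throwaway client container and runs ceph -s inside it
    cephadm shell -- ceph -s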
Nov 29 05:07:51 compute-0 systemd[1]: libpod-7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef.scope: Deactivated successfully.
Nov 29 05:07:51 compute-0 podman[74824]: 2025-11-29 05:07:51.920895433 +0000 UTC m=+0.633418416 container died 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-44d2e8077e70b0d70eed71a99df0c6f18dc45efe530acd91178f9b29bb20030b-merged.mount: Deactivated successfully.
Nov 29 05:07:51 compute-0 podman[74824]: 2025-11-29 05:07:51.961470316 +0000 UTC m=+0.673993259 container remove 7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef (image=quay.io/ceph/ceph:v18, name=goofy_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:07:51 compute-0 systemd[1]: libpod-conmon-7998f6073730aba547aadf0ef263479f9c172cce4e297b558dc8e54a468b24ef.scope: Deactivated successfully.
Nov 29 05:07:52 compute-0 podman[74915]: 2025-11-29 05:07:52.021711031 +0000 UTC m=+0.039907229 container create ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 05:07:52 compute-0 systemd[1]: Started libpod-conmon-ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e.scope.
Nov 29 05:07:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc80fad490ef13299e60902c0d030d0b1119bff624d646ca762d553f908b6aa8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc80fad490ef13299e60902c0d030d0b1119bff624d646ca762d553f908b6aa8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc80fad490ef13299e60902c0d030d0b1119bff624d646ca762d553f908b6aa8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc80fad490ef13299e60902c0d030d0b1119bff624d646ca762d553f908b6aa8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:52 compute-0 podman[74915]: 2025-11-29 05:07:52.004254682 +0000 UTC m=+0.022450860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:52 compute-0 podman[74915]: 2025-11-29 05:07:52.105694825 +0000 UTC m=+0.123891013 container init ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:07:52 compute-0 podman[74915]: 2025-11-29 05:07:52.112875927 +0000 UTC m=+0.131072105 container start ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:07:52 compute-0 podman[74915]: 2025-11-29 05:07:52.117678342 +0000 UTC m=+0.135874510 container attach ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:52 compute-0 ceph-mon[74823]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 05:07:52 compute-0 ceph-mon[74823]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 05:07:52 compute-0 ceph-mon[74823]: fsmap 
Nov 29 05:07:52 compute-0 ceph-mon[74823]: osdmap e1: 0 total, 0 up, 0 in
Nov 29 05:07:52 compute-0 ceph-mon[74823]: mgrmap e1: no daemons active
Nov 29 05:07:52 compute-0 ceph-mon[74823]: from='client.? 192.168.122.100:0/900733589' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 05:07:52 compute-0 ceph-mon[74823]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 05:07:52 compute-0 ceph-mon[74823]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672485794' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 05:07:52 compute-0 ceph-mon[74823]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672485794' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 05:07:52 compute-0 cool_cannon[74932]: 
Nov 29 05:07:52 compute-0 cool_cannon[74932]: [global]
Nov 29 05:07:52 compute-0 cool_cannon[74932]:         fsid = 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:07:52 compute-0 cool_cannon[74932]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 29 05:07:52 compute-0 cool_cannon[74932]:         osd_crush_chooseleaf_type = 0
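Note: the cool_cannon block above is the remainder printed by "config assimilate-conf": options from the bootstrap ceph.conf were absorbed into the cluster's central config database, and keys a client needs before it can reach the mons at all (fsid, mon_host) are echoed back so they stay in the local file. Standard usage, as a sketch with illustrative file paths:

    # feed a ceph.conf into the central config store; anything that cannot be
    # assimilated is written to the -o file
    ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.remainder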
Nov 29 05:07:52 compute-0 systemd[1]: libpod-ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e.scope: Deactivated successfully.
Nov 29 05:07:52 compute-0 podman[74959]: 2025-11-29 05:07:52.564416608 +0000 UTC m=+0.038624907 container died ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 05:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc80fad490ef13299e60902c0d030d0b1119bff624d646ca762d553f908b6aa8-merged.mount: Deactivated successfully.
Nov 29 05:07:52 compute-0 podman[74959]: 2025-11-29 05:07:52.604157492 +0000 UTC m=+0.078365781 container remove ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e (image=quay.io/ceph/ceph:v18, name=cool_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 05:07:52 compute-0 systemd[1]: libpod-conmon-ef2cf625096ca9068d1bb2b259e8ca403872fce0106d839c5f8bd920191f2b5e.scope: Deactivated successfully.
Nov 29 05:07:52 compute-0 podman[74974]: 2025-11-29 05:07:52.694110849 +0000 UTC m=+0.056889696 container create 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:07:52 compute-0 systemd[1]: Started libpod-conmon-0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b.scope.
Nov 29 05:07:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc85cbc1bcd7eed4c0bddf5329703465a6925ef58ac16abd4ff49e8ac7ff317/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc85cbc1bcd7eed4c0bddf5329703465a6925ef58ac16abd4ff49e8ac7ff317/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc85cbc1bcd7eed4c0bddf5329703465a6925ef58ac16abd4ff49e8ac7ff317/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddc85cbc1bcd7eed4c0bddf5329703465a6925ef58ac16abd4ff49e8ac7ff317/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:52 compute-0 podman[74974]: 2025-11-29 05:07:52.752785367 +0000 UTC m=+0.115564234 container init 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:07:52 compute-0 podman[74974]: 2025-11-29 05:07:52.761135967 +0000 UTC m=+0.123914834 container start 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:52 compute-0 podman[74974]: 2025-11-29 05:07:52.764818975 +0000 UTC m=+0.127597872 container attach 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 05:07:52 compute-0 podman[74974]: 2025-11-29 05:07:52.6774777 +0000 UTC m=+0.040256567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:53 compute-0 ceph-mon[74823]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:07:53 compute-0 ceph-mon[74823]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3495515744' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
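Note: "config generate-minimal-conf", dispatched above, asks the mon for the smallest ceph.conf a client needs, essentially fsid plus mon_host; the bootstrap installs its output as /etc/ceph/ceph.conf. A sketch of running it by hand:

    cephadm shell -- ceph config generate-minimal-conf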
Nov 29 05:07:53 compute-0 systemd[1]: libpod-0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b.scope: Deactivated successfully.
Nov 29 05:07:53 compute-0 podman[74974]: 2025-11-29 05:07:53.142072385 +0000 UTC m=+0.504851262 container died 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 05:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddc85cbc1bcd7eed4c0bddf5329703465a6925ef58ac16abd4ff49e8ac7ff317-merged.mount: Deactivated successfully.
Nov 29 05:07:53 compute-0 podman[74974]: 2025-11-29 05:07:53.175522597 +0000 UTC m=+0.538301444 container remove 0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b (image=quay.io/ceph/ceph:v18, name=pensive_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:07:53 compute-0 systemd[1]: libpod-conmon-0d5dcc22adfcc5abf6fa6ee6a64399ec8e7c0ffca7248e279d1a41930cf6886b.scope: Deactivated successfully.
Nov 29 05:07:53 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:07:53 compute-0 ceph-mon[74823]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 05:07:53 compute-0 ceph-mon[74823]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 05:07:53 compute-0 ceph-mon[74823]: mon.compute-0@0(leader) e1 shutdown
Nov 29 05:07:53 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0[74819]: 2025-11-29T05:07:53.359+0000 7f1947c8b640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 05:07:53 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0[74819]: 2025-11-29T05:07:53.359+0000 7f1947c8b640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 05:07:53 compute-0 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 05:07:53 compute-0 ceph-mon[74823]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 05:07:53 compute-0 podman[75056]: 2025-11-29 05:07:53.554248041 +0000 UTC m=+0.232094738 container died 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 05:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ccfd9345522cff3c8f93e856cbc098d29bc9341e8da681f122479caebe3b5d5-merged.mount: Deactivated successfully.
Nov 29 05:07:53 compute-0 podman[75056]: 2025-11-29 05:07:53.589331363 +0000 UTC m=+0.267178030 container remove 6e41c3709598501ae8b4db6bc10367a416dd9851b117ba853f2ee8c226028b16 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:53 compute-0 bash[75056]: ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0
Nov 29 05:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:53 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 05:07:53 compute-0 systemd[1]: ceph-93f82912-647c-5e78-b081-707d0a2966d8@mon.compute-0.service: Deactivated successfully.
Nov 29 05:07:53 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8.
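Note: the stop/start pair here is the bootstrap cycling the mon into its permanent systemd unit. Cephadm names each daemon's unit ceph-<fsid>@<daemon>.service, visible verbatim above, and the usual systemctl verbs apply:

    systemctl status ceph-93f82912-647c-5e78-b081-707d0a2966d8@mon.compute-0.service
    # restarting would replay the container teardown/startup the log shows
    systemctl restart ceph-93f82912-647c-5e78-b081-707d0a2966d8@mon.compute-0.service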
Nov 29 05:07:53 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:07:53 compute-0 podman[75159]: 2025-11-29 05:07:53.921025899 +0000 UTC m=+0.036398954 container create 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899d366587e944f3c7861888775ef9538ac22c0a08d8797a13164388322d62de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899d366587e944f3c7861888775ef9538ac22c0a08d8797a13164388322d62de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899d366587e944f3c7861888775ef9538ac22c0a08d8797a13164388322d62de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/899d366587e944f3c7861888775ef9538ac22c0a08d8797a13164388322d62de/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:53 compute-0 podman[75159]: 2025-11-29 05:07:53.976993092 +0000 UTC m=+0.092366197 container init 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:07:53 compute-0 podman[75159]: 2025-11-29 05:07:53.983806625 +0000 UTC m=+0.099179690 container start 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:07:53 compute-0 bash[75159]: 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113
Nov 29 05:07:53 compute-0 podman[75159]: 2025-11-29 05:07:53.904867252 +0000 UTC m=+0.020240327 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:53 compute-0 systemd[1]: Started Ceph mon.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:07:54 compute-0 ceph-mon[75176]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 05:07:54 compute-0 ceph-mon[75176]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 05:07:54 compute-0 ceph-mon[75176]: pidfile_write: ignore empty --pid-file
Nov 29 05:07:54 compute-0 ceph-mon[75176]: load: jerasure load: lrc 
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: RocksDB version: 7.9.2
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Git sha 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: DB SUMMARY
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: DB Session ID:  HDG9CTZH3D8UGVBA5ZVT
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: CURRENT file:  CURRENT
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 52074 ; 
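Note: the DB SUMMARY above enumerates the store's on-disk pieces (CURRENT, IDENTITY, MANIFEST-000010, one SST file, one WAL). The logged path is the view inside the mon container; on the host, cephadm keeps the same directory under /var/lib/ceph/<fsid>/mon.<hostname>/. A quick look from inside "cephadm shell --name mon.compute-0", as a sketch:

    ls -l /var/lib/ceph/mon/ceph-compute-0/store.db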
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                         Options.error_if_exists: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                       Options.create_if_missing: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                                     Options.env: 0x556a61cc2c40
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                                Options.info_log: 0x556a62a2f040
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                              Options.statistics: (nil)
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                               Options.use_fsync: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                              Options.db_log_dir: 
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                                 Options.wal_dir: 
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                    Options.write_buffer_manager: 0x556a62a3eb40
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                  Options.unordered_write: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                               Options.row_cache: None
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                              Options.wal_filter: None
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.two_write_queues: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.wal_compression: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.atomic_flush: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.max_background_jobs: 2
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.max_background_compactions: -1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.max_subcompactions: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                          Options.max_open_files: -1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Compression algorithms supported:
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         kZSTD supported: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         kXpressCompression supported: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         kBZip2Compression supported: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         kLZ4Compression supported: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         kZlibCompression supported: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         kSnappyCompression supported: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:           Options.merge_operator: 
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:        Options.compaction_filter: None
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556a62a2ec40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556a62a271f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:          Options.compression: NoCompression
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.num_levels: 7
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
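The column-family options dump above is plain "Options.<key>: <value>" text, one setting per line. A minimal Python sketch for folding a saved journal excerpt like this into a dict, useful for diffing against another mon's settings (the file name and regex are illustrative assumptions, not anything Ceph ships):

    import re

    # Sketch: collect "Options.<key>: <value>" pairs from a saved journal
    # excerpt such as the dump above. Path and pattern are assumptions.
    OPT = re.compile(r'Options\.([A-Za-z0-9_.\[\]]+):\s*(.*\S)')

    def rocksdb_options(path):
        opts = {}
        with open(path) as fh:
            for line in fh:
                m = OPT.search(line)
                if m:
                    opts[m.group(1)] = m.group(2)
        return opts

    opts = rocksdb_options('mon-startup.log')
    print(opts.get('write_buffer_size'))   # '33554432' per the dump above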
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded, manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5, prev_log_number is 0, max_column_family is 0, min_log_number_to_keep is 5
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e7a482e8-4a7b-461a-a1cb-36d637653226
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392874024619, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392874027212, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 51790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 129, "table_properties": {"data_size": 50347, "index_size": 149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2940, "raw_average_key_size": 30, "raw_value_size": 48026, "raw_average_value_size": 500, "num_data_blocks": 7, "num_entries": 96, "num_filter_entries": 96, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392874, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392874027317, "job": 1, "event": "recovery_finished"}
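The EVENT_LOG_v1 lines are the machine-readable counterpart of the recovery messages around them: everything after the "EVENT_LOG_v1 " token is one JSON object. A small sketch, again against a hypothetical saved capture:

    import json

    # Sketch: yield the JSON payload of each RocksDB EVENT_LOG_v1 record,
    # e.g. the recovery_started / table_file_creation / recovery_finished
    # events above. The input path is an assumption.
    def rocksdb_events(path):
        with open(path) as fh:
            for line in fh:
                _, sep, payload = line.partition('EVENT_LOG_v1 ')
                if sep:
                    yield json.loads(payload)

    for ev in rocksdb_events('mon-startup.log'):
        if ev.get('event') == 'table_file_creation':
            print(ev['file_number'], ev['file_size'])   # 13 51790 above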
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556a62a50e00
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: DB pointer 0x556a62ada000
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:07:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   52.47 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0   52.47 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 4.21 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 4.21 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556a62a271f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
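Two figures in the block-cache line above are worth decoding: the 512.00 MB capacity is exactly the 536870912-byte block_cache capacity from the table_factory options earlier, and the huge "occupancy" value is 2^64 - 1, an all-ones unsigned counter rather than a real entry count. A quick arithmetic check:

    # Quick arithmetic check on the BinnedLRUCache line above.
    capacity_bytes = 536870912        # block_cache_options capacity in the dump
    print(capacity_bytes / 2**20)     # 512.0 -> matches "capacity: 512.00 MB"
    print(2**64 - 1)                  # 18446744073709551615 -> the "occupancy"
                                      # figure, an all-ones unsigned value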
Nov 29 05:07:54 compute-0 ceph-mon[75176]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@-1(???) e1 preinit fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@-1(???).mds e1 new map
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 05:07:54 compute-0 ceph-mon[75176]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 05:07:54 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 05:07:54 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 05:07:54 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 05:07:54 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 05:07:54 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
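The standalone election above leaves compute-0 as the only monitor in quorum. The quorum view the log summarizes can be fetched over librados as well; a sketch with the python-rados binding (the conffile path is an assumption, and the client needs a valid keyring):

    import json
    import rados

    # Sketch: ask the monitor for its quorum view via python-rados.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # path assumed
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'quorum_status', 'format': 'json'}), b'')
    q = json.loads(out)
    print(q['quorum_names'])   # expect ['compute-0'] for this log
    cluster.shutdown()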
Nov 29 05:07:54 compute-0 podman[75177]: 2025-11-29 05:07:54.059974322 +0000 UTC m=+0.046795583 container create 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:07:54 compute-0 systemd[1]: Started libpod-conmon-7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260.scope.
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 05:07:54 compute-0 ceph-mon[75176]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 05:07:54 compute-0 ceph-mon[75176]: fsmap 
Nov 29 05:07:54 compute-0 ceph-mon[75176]: osdmap e1: 0 total, 0 up, 0 in
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mgrmap e1: no daemons active
Nov 29 05:07:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117de475ea33be6cbe60fea62320e7657a730593a3fbaaa933d15c5beb330108/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117de475ea33be6cbe60fea62320e7657a730593a3fbaaa933d15c5beb330108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117de475ea33be6cbe60fea62320e7657a730593a3fbaaa933d15c5beb330108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:54 compute-0 podman[75177]: 2025-11-29 05:07:54.034794958 +0000 UTC m=+0.021616239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:54 compute-0 podman[75177]: 2025-11-29 05:07:54.143434924 +0000 UTC m=+0.130256215 container init 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:07:54 compute-0 podman[75177]: 2025-11-29 05:07:54.150422642 +0000 UTC m=+0.137243903 container start 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:07:54 compute-0 podman[75177]: 2025-11-29 05:07:54.153333841 +0000 UTC m=+0.140155102 container attach 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:07:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 29 05:07:54 compute-0 systemd[1]: libpod-7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260.scope: Deactivated successfully.
Nov 29 05:07:54 compute-0 podman[75177]: 2025-11-29 05:07:54.577763952 +0000 UTC m=+0.564585213 container died 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 05:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-117de475ea33be6cbe60fea62320e7657a730593a3fbaaa933d15c5beb330108-merged.mount: Deactivated successfully.
Nov 29 05:07:54 compute-0 podman[75177]: 2025-11-29 05:07:54.630713972 +0000 UTC m=+0.617535233 container remove 7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260 (image=quay.io/ceph/ceph:v18, name=pensive_payne, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:07:54 compute-0 systemd[1]: libpod-conmon-7040152b469da57fbf045f599371ac33bdac495905e39c19348b759e1f184260.scope: Deactivated successfully.
Nov 29 05:07:54 compute-0 podman[75272]: 2025-11-29 05:07:54.697530355 +0000 UTC m=+0.045871971 container create ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:07:54 compute-0 systemd[1]: Started libpod-conmon-ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6.scope.
Nov 29 05:07:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588edd55bd9ff77c963929fced10ff7cf9eff9b0a83f4b703796c50459fcc703/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588edd55bd9ff77c963929fced10ff7cf9eff9b0a83f4b703796c50459fcc703/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/588edd55bd9ff77c963929fced10ff7cf9eff9b0a83f4b703796c50459fcc703/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:54 compute-0 podman[75272]: 2025-11-29 05:07:54.67644092 +0000 UTC m=+0.024782566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:54 compute-0 podman[75272]: 2025-11-29 05:07:54.782981205 +0000 UTC m=+0.131322871 container init ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:07:54 compute-0 podman[75272]: 2025-11-29 05:07:54.788770114 +0000 UTC m=+0.137111740 container start ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:07:54 compute-0 podman[75272]: 2025-11-29 05:07:54.792734319 +0000 UTC m=+0.141075935 container attach ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 05:07:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 29 05:07:55 compute-0 systemd[1]: libpod-ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6.scope: Deactivated successfully.
Nov 29 05:07:55 compute-0 podman[75272]: 2025-11-29 05:07:55.218481601 +0000 UTC m=+0.566823217 container died ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-588edd55bd9ff77c963929fced10ff7cf9eff9b0a83f4b703796c50459fcc703-merged.mount: Deactivated successfully.
Nov 29 05:07:55 compute-0 podman[75272]: 2025-11-29 05:07:55.262237321 +0000 UTC m=+0.610578927 container remove ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6 (image=quay.io/ceph/ceph:v18, name=clever_rubin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:07:55 compute-0 systemd[1]: libpod-conmon-ae391d396a8554f9838a9ffc7858bcbfd4b09dbddf952a1132299c1e4d4928e6.scope: Deactivated successfully.
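The two short-lived containers above (pensive_payne, clever_rubin) each run a single CLI call and exit; the matching handle_command lines show "config set" for public_network and then cluster_network, with the values elided by the mon's log. The equivalent call through python-rados would look roughly like this (the 'who' target and the subnet value are assumptions, since the log does not record them):

    import json
    import rados

    # Sketch of the "config set" calls the bootstrap containers issued.
    # who/value below are assumptions; the mon log above elides them.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    for name in ('public_network', 'cluster_network'):
        cluster.mon_command(json.dumps({
            'prefix': 'config set',
            'who': 'global',                 # assumed target
            'name': name,
            'value': '192.168.122.0/24',     # assumed; not shown in the log
        }), b'')
    cluster.shutdown()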
Nov 29 05:07:55 compute-0 systemd[1]: Reloading.
Nov 29 05:07:55 compute-0 systemd-rc-local-generator[75353]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:07:55 compute-0 systemd-sysv-generator[75358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:07:55 compute-0 systemd[1]: Reloading.
Nov 29 05:07:55 compute-0 systemd-rc-local-generator[75398]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:07:55 compute-0 systemd-sysv-generator[75402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:07:55 compute-0 systemd[1]: Starting Ceph mgr.compute-0.csskcz for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:07:56 compute-0 podman[75453]: 2025-11-29 05:07:56.066888742 +0000 UTC m=+0.048900424 container create 342af346b41939b95314e0e65e243ee8d91c2007b503527a0814b79d2ccec8d2 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc20a9aa48f08db593c59dc3348cb57140a179d04ae10da9067be2b5222068d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc20a9aa48f08db593c59dc3348cb57140a179d04ae10da9067be2b5222068d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc20a9aa48f08db593c59dc3348cb57140a179d04ae10da9067be2b5222068d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc20a9aa48f08db593c59dc3348cb57140a179d04ae10da9067be2b5222068d/merged/var/lib/ceph/mgr/ceph-compute-0.csskcz supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:56 compute-0 podman[75453]: 2025-11-29 05:07:56.116679846 +0000 UTC m=+0.098691528 container init 342af346b41939b95314e0e65e243ee8d91c2007b503527a0814b79d2ccec8d2 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:07:56 compute-0 podman[75453]: 2025-11-29 05:07:56.125473857 +0000 UTC m=+0.107485509 container start 342af346b41939b95314e0e65e243ee8d91c2007b503527a0814b79d2ccec8d2 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:07:56 compute-0 bash[75453]: 342af346b41939b95314e0e65e243ee8d91c2007b503527a0814b79d2ccec8d2
Nov 29 05:07:56 compute-0 podman[75453]: 2025-11-29 05:07:56.042426855 +0000 UTC m=+0.024438607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:56 compute-0 systemd[1]: Started Ceph mgr.compute-0.csskcz for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:07:56 compute-0 ceph-mgr[75473]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 05:07:56 compute-0 ceph-mgr[75473]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 05:07:56 compute-0 ceph-mgr[75473]: pidfile_write: ignore empty --pid-file
Nov 29 05:07:56 compute-0 podman[75474]: 2025-11-29 05:07:56.228433777 +0000 UTC m=+0.058334301 container create dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 05:07:56 compute-0 systemd[1]: Started libpod-conmon-dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde.scope.
Nov 29 05:07:56 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'alerts'
Nov 29 05:07:56 compute-0 podman[75474]: 2025-11-29 05:07:56.208337175 +0000 UTC m=+0.038237709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335705713c2cb218d59ec9f95869aa5b1eea83477190ffcc5d4cc2d0905db25/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335705713c2cb218d59ec9f95869aa5b1eea83477190ffcc5d4cc2d0905db25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4335705713c2cb218d59ec9f95869aa5b1eea83477190ffcc5d4cc2d0905db25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:56 compute-0 podman[75474]: 2025-11-29 05:07:56.336182521 +0000 UTC m=+0.166083055 container init dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:07:56 compute-0 podman[75474]: 2025-11-29 05:07:56.344657194 +0000 UTC m=+0.174557718 container start dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:07:56 compute-0 podman[75474]: 2025-11-29 05:07:56.350395592 +0000 UTC m=+0.180296096 container attach dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:07:56 compute-0 ceph-mgr[75473]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 05:07:56 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'balancer'
Nov 29 05:07:56 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:07:56.589+0000 7f55e947f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 05:07:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 05:07:56 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066047919' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:07:56 compute-0 stoic_hugle[75514]: 
Nov 29 05:07:56 compute-0 stoic_hugle[75514]: {
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "health": {
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "status": "HEALTH_OK",
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "checks": {},
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "mutes": []
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     },
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "election_epoch": 5,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "quorum": [
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         0
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     ],
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "quorum_names": [
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "compute-0"
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     ],
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "quorum_age": 2,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "monmap": {
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "epoch": 1,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "min_mon_release_name": "reef",
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "num_mons": 1
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     },
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "osdmap": {
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "epoch": 1,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "num_osds": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "num_up_osds": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "osd_up_since": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "num_in_osds": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "osd_in_since": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "num_remapped_pgs": 0
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     },
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "pgmap": {
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "pgs_by_state": [],
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "num_pgs": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "num_pools": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "num_objects": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "data_bytes": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "bytes_used": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "bytes_avail": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "bytes_total": 0
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     },
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "fsmap": {
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "epoch": 1,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "by_rank": [],
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "up:standby": 0
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     },
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "mgrmap": {
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "available": false,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "num_standbys": 0,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "modules": [
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:             "iostat",
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:             "nfs",
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:             "restful"
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         ],
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "services": {}
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     },
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "servicemap": {
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "epoch": 1,
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:         "services": {}
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     },
Nov 29 05:07:56 compute-0 stoic_hugle[75514]:     "progress_events": {}
Nov 29 05:07:56 compute-0 stoic_hugle[75514]: }
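Once the syslog prefixes are stripped, the container output above is one plain JSON document, so the interesting fields can be read back directly; a sketch against a hypothetical saved copy:

    import json

    # Sketch: read back the status document printed by the container above.
    # 'ceph-status.json' is a hypothetical copy with syslog prefixes removed.
    st = json.loads(open('ceph-status.json').read())
    assert st['health']['status'] == 'HEALTH_OK'
    assert st['quorum_names'] == ['compute-0']
    print(st['osdmap']['num_osds'], st['pgmap']['num_pgs'])   # 0 0 here:
                                                              # no OSDs yet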
Nov 29 05:07:56 compute-0 systemd[1]: libpod-dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde.scope: Deactivated successfully.
Nov 29 05:07:56 compute-0 podman[75540]: 2025-11-29 05:07:56.793777668 +0000 UTC m=+0.022315177 container died dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:07:56 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2066047919' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4335705713c2cb218d59ec9f95869aa5b1eea83477190ffcc5d4cc2d0905db25-merged.mount: Deactivated successfully.
Nov 29 05:07:56 compute-0 podman[75540]: 2025-11-29 05:07:56.838990812 +0000 UTC m=+0.067528301 container remove dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde (image=quay.io/ceph/ceph:v18, name=stoic_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:07:56 compute-0 systemd[1]: libpod-conmon-dd1db417156805f4519723447e9db115531d86b8615042db57aaafd12b2ebcde.scope: Deactivated successfully.
Nov 29 05:07:56 compute-0 ceph-mgr[75473]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 05:07:56 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'cephadm'
Nov 29 05:07:56 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:07:56.875+0000 7f55e947f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 05:07:58 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'crash'
Nov 29 05:07:58 compute-0 podman[75565]: 2025-11-29 05:07:58.909441166 +0000 UTC m=+0.037027269 container create a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:07:58 compute-0 systemd[1]: Started libpod-conmon-a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c.scope.
Nov 29 05:07:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187074d9041ceeef0ded8715f6361129fd9a5b50c4bf6f63b5a7ca0fe5d06d3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187074d9041ceeef0ded8715f6361129fd9a5b50c4bf6f63b5a7ca0fe5d06d3c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187074d9041ceeef0ded8715f6361129fd9a5b50c4bf6f63b5a7ca0fe5d06d3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:07:58 compute-0 podman[75565]: 2025-11-29 05:07:58.978613065 +0000 UTC m=+0.106199188 container init a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:07:58 compute-0 podman[75565]: 2025-11-29 05:07:58.983899262 +0000 UTC m=+0.111485365 container start a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:07:58 compute-0 podman[75565]: 2025-11-29 05:07:58.98673373 +0000 UTC m=+0.114319853 container attach a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:07:58 compute-0 podman[75565]: 2025-11-29 05:07:58.893343089 +0000 UTC m=+0.020929212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:07:59 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:07:59.066+0000 7f55e947f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 05:07:59 compute-0 ceph-mgr[75473]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 05:07:59 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'dashboard'
Nov 29 05:07:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 05:07:59 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1812151667' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:07:59 compute-0 competent_mendel[75581]: 
Nov 29 05:07:59 compute-0 competent_mendel[75581]: {
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "health": {
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "status": "HEALTH_OK",
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "checks": {},
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "mutes": []
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     },
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "election_epoch": 5,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "quorum": [
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         0
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     ],
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "quorum_names": [
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "compute-0"
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     ],
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "quorum_age": 5,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "monmap": {
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "epoch": 1,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "min_mon_release_name": "reef",
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "num_mons": 1
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     },
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "osdmap": {
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "epoch": 1,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "num_osds": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "num_up_osds": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "osd_up_since": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "num_in_osds": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "osd_in_since": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "num_remapped_pgs": 0
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     },
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "pgmap": {
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "pgs_by_state": [],
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "num_pgs": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "num_pools": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "num_objects": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "data_bytes": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "bytes_used": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "bytes_avail": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "bytes_total": 0
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     },
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "fsmap": {
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "epoch": 1,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "by_rank": [],
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "up:standby": 0
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     },
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "mgrmap": {
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "available": false,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "num_standbys": 0,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "modules": [
Nov 29 05:07:59 compute-0 competent_mendel[75581]:             "iostat",
Nov 29 05:07:59 compute-0 competent_mendel[75581]:             "nfs",
Nov 29 05:07:59 compute-0 competent_mendel[75581]:             "restful"
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         ],
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "services": {}
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     },
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "servicemap": {
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "epoch": 1,
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 05:07:59 compute-0 competent_mendel[75581]:         "services": {}
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     },
Nov 29 05:07:59 compute-0 competent_mendel[75581]:     "progress_events": {}
Nov 29 05:07:59 compute-0 competent_mendel[75581]: }
Nov 29 05:07:59 compute-0 systemd[1]: libpod-a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c.scope: Deactivated successfully.
Nov 29 05:07:59 compute-0 podman[75565]: 2025-11-29 05:07:59.366498229 +0000 UTC m=+0.494084342 container died a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 05:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-187074d9041ceeef0ded8715f6361129fd9a5b50c4bf6f63b5a7ca0fe5d06d3c-merged.mount: Deactivated successfully.
Nov 29 05:07:59 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1812151667' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:07:59 compute-0 podman[75565]: 2025-11-29 05:07:59.410889104 +0000 UTC m=+0.538475207 container remove a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c (image=quay.io/ceph/ceph:v18, name=competent_mendel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:07:59 compute-0 systemd[1]: libpod-conmon-a432ea5e16574e8b910cf9474e0fc1821af1b4758631746fc19d30f8018baa6c.scope: Deactivated successfully.
Nov 29 05:08:00 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'devicehealth'
Nov 29 05:08:00 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:00.682+0000 7f55e947f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 05:08:00 compute-0 ceph-mgr[75473]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 05:08:00 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 05:08:01 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 05:08:01 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 05:08:01 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]:   from numpy import show_config as show_numpy_config
Nov 29 05:08:01 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:01.191+0000 7f55e947f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 05:08:01 compute-0 ceph-mgr[75473]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 05:08:01 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'influx'
Nov 29 05:08:01 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:01.439+0000 7f55e947f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 05:08:01 compute-0 ceph-mgr[75473]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 05:08:01 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'insights'
Nov 29 05:08:01 compute-0 podman[75621]: 2025-11-29 05:08:01.488015058 +0000 UTC m=+0.053434963 container create abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:08:01 compute-0 systemd[1]: Started libpod-conmon-abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a.scope.
Nov 29 05:08:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a42eb62d72a5d4275cff3c00ba2409970c52401f086032a9870423d1703017c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a42eb62d72a5d4275cff3c00ba2409970c52401f086032a9870423d1703017c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a42eb62d72a5d4275cff3c00ba2409970c52401f086032a9870423d1703017c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:01 compute-0 podman[75621]: 2025-11-29 05:08:01.562757481 +0000 UTC m=+0.128177386 container init abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 05:08:01 compute-0 podman[75621]: 2025-11-29 05:08:01.466876391 +0000 UTC m=+0.032296316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:01 compute-0 podman[75621]: 2025-11-29 05:08:01.571616003 +0000 UTC m=+0.137035918 container start abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 05:08:01 compute-0 podman[75621]: 2025-11-29 05:08:01.575434335 +0000 UTC m=+0.140854260 container attach abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:01 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'iostat'
Nov 29 05:08:01 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:01.917+0000 7f55e947f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 05:08:01 compute-0 ceph-mgr[75473]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 05:08:01 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'k8sevents'
Nov 29 05:08:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 05:08:01 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1785862208' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]: 
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]: {
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "health": {
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "status": "HEALTH_OK",
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "checks": {},
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "mutes": []
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     },
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "election_epoch": 5,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "quorum": [
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         0
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     ],
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "quorum_names": [
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "compute-0"
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     ],
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "quorum_age": 7,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "monmap": {
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "epoch": 1,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "min_mon_release_name": "reef",
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "num_mons": 1
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     },
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "osdmap": {
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "epoch": 1,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "num_osds": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "num_up_osds": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "osd_up_since": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "num_in_osds": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "osd_in_since": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "num_remapped_pgs": 0
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     },
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "pgmap": {
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "pgs_by_state": [],
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "num_pgs": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "num_pools": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "num_objects": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "data_bytes": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "bytes_used": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "bytes_avail": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "bytes_total": 0
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     },
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "fsmap": {
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "epoch": 1,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "by_rank": [],
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "up:standby": 0
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     },
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "mgrmap": {
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "available": false,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "num_standbys": 0,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "modules": [
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:             "iostat",
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:             "nfs",
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:             "restful"
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         ],
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "services": {}
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     },
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "servicemap": {
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "epoch": 1,
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:         "services": {}
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     },
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]:     "progress_events": {}
Nov 29 05:08:01 compute-0 goofy_hamilton[75638]: }
Nov 29 05:08:01 compute-0 systemd[1]: libpod-abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a.scope: Deactivated successfully.
Nov 29 05:08:01 compute-0 podman[75621]: 2025-11-29 05:08:01.987098389 +0000 UTC m=+0.552518334 container died abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a42eb62d72a5d4275cff3c00ba2409970c52401f086032a9870423d1703017c-merged.mount: Deactivated successfully.
Nov 29 05:08:02 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1785862208' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:02 compute-0 podman[75621]: 2025-11-29 05:08:02.031656488 +0000 UTC m=+0.597076403 container remove abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a (image=quay.io/ceph/ceph:v18, name=goofy_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:08:02 compute-0 systemd[1]: libpod-conmon-abb6676e061b887f0446ad421267f3c750ff329974fe42ab11e85a473d03b15a.scope: Deactivated successfully.
Nov 29 05:08:03 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'localpool'
Nov 29 05:08:03 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 05:08:04 compute-0 podman[75678]: 2025-11-29 05:08:04.099300555 +0000 UTC m=+0.043427083 container create dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:04 compute-0 systemd[1]: Started libpod-conmon-dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619.scope.
Nov 29 05:08:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1728d2b0479ee4d2227dd5185ec406ac57bb380ddda4127b038fa22cb0966d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1728d2b0479ee4d2227dd5185ec406ac57bb380ddda4127b038fa22cb0966d23/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1728d2b0479ee4d2227dd5185ec406ac57bb380ddda4127b038fa22cb0966d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:04 compute-0 podman[75678]: 2025-11-29 05:08:04.076313173 +0000 UTC m=+0.020439731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:04 compute-0 podman[75678]: 2025-11-29 05:08:04.176237531 +0000 UTC m=+0.120364129 container init dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:08:04 compute-0 podman[75678]: 2025-11-29 05:08:04.183123576 +0000 UTC m=+0.127250104 container start dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:08:04 compute-0 podman[75678]: 2025-11-29 05:08:04.186757862 +0000 UTC m=+0.130884480 container attach dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:08:04 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'mirroring'
Nov 29 05:08:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 05:08:04 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502031567' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]: 
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]: {
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "health": {
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "status": "HEALTH_OK",
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "checks": {},
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "mutes": []
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     },
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "election_epoch": 5,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "quorum": [
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         0
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     ],
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "quorum_names": [
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "compute-0"
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     ],
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "quorum_age": 10,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "monmap": {
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "epoch": 1,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "min_mon_release_name": "reef",
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "num_mons": 1
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     },
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "osdmap": {
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "epoch": 1,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "num_osds": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "num_up_osds": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "osd_up_since": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "num_in_osds": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "osd_in_since": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "num_remapped_pgs": 0
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     },
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "pgmap": {
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "pgs_by_state": [],
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "num_pgs": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "num_pools": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "num_objects": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "data_bytes": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "bytes_used": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "bytes_avail": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "bytes_total": 0
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     },
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "fsmap": {
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "epoch": 1,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "by_rank": [],
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "up:standby": 0
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     },
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "mgrmap": {
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "available": false,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "num_standbys": 0,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "modules": [
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:             "iostat",
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:             "nfs",
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:             "restful"
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         ],
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "services": {}
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     },
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "servicemap": {
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "epoch": 1,
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:         "services": {}
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     },
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]:     "progress_events": {}
Nov 29 05:08:04 compute-0 crazy_lovelace[75695]: }
Nov 29 05:08:04 compute-0 systemd[1]: libpod-dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619.scope: Deactivated successfully.
Nov 29 05:08:04 compute-0 podman[75678]: 2025-11-29 05:08:04.579047382 +0000 UTC m=+0.523173910 container died dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:08:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1728d2b0479ee4d2227dd5185ec406ac57bb380ddda4127b038fa22cb0966d23-merged.mount: Deactivated successfully.
Nov 29 05:08:04 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1502031567' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:04 compute-0 podman[75678]: 2025-11-29 05:08:04.618072718 +0000 UTC m=+0.562199246 container remove dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619 (image=quay.io/ceph/ceph:v18, name=crazy_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 05:08:04 compute-0 systemd[1]: libpod-conmon-dffe5fccd7e652ee29c1a3a8f06f801f26061c7fd677deabf5a2e950e8835619.scope: Deactivated successfully.
Nov 29 05:08:04 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'nfs'
Nov 29 05:08:05 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:05.519+0000 7f55e947f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 05:08:05 compute-0 ceph-mgr[75473]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 05:08:05 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'orchestrator'
Nov 29 05:08:06 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:06.226+0000 7f55e947f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 05:08:06 compute-0 ceph-mgr[75473]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 05:08:06 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 05:08:06 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:06.493+0000 7f55e947f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 05:08:06 compute-0 ceph-mgr[75473]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 05:08:06 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'osd_support'
Nov 29 05:08:06 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:06.719+0000 7f55e947f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 05:08:06 compute-0 ceph-mgr[75473]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 05:08:06 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 05:08:06 compute-0 podman[75735]: 2025-11-29 05:08:06.665803758 +0000 UTC m=+0.022225645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:06 compute-0 podman[75735]: 2025-11-29 05:08:06.769606117 +0000 UTC m=+0.126027994 container create 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:08:06 compute-0 systemd[1]: Started libpod-conmon-27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7.scope.
Nov 29 05:08:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823a8a482773534b946d3d92e230f44e2a3589f8f50272bc894c04fe49f9d600/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823a8a482773534b946d3d92e230f44e2a3589f8f50272bc894c04fe49f9d600/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/823a8a482773534b946d3d92e230f44e2a3589f8f50272bc894c04fe49f9d600/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:06 compute-0 podman[75735]: 2025-11-29 05:08:06.831467951 +0000 UTC m=+0.187889868 container init 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:06 compute-0 podman[75735]: 2025-11-29 05:08:06.838191653 +0000 UTC m=+0.194613530 container start 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:08:06 compute-0 podman[75735]: 2025-11-29 05:08:06.84224914 +0000 UTC m=+0.198671017 container attach 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:07.014+0000 7f55e947f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 05:08:07 compute-0 ceph-mgr[75473]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 05:08:07 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'progress'
Nov 29 05:08:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 05:08:07 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2971488717' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]: 
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]: {
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "health": {
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "status": "HEALTH_OK",
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "checks": {},
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "mutes": []
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     },
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "election_epoch": 5,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "quorum": [
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         0
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     ],
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "quorum_names": [
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "compute-0"
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     ],
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "quorum_age": 13,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "monmap": {
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "epoch": 1,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "min_mon_release_name": "reef",
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "num_mons": 1
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     },
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "osdmap": {
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "epoch": 1,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "num_osds": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "num_up_osds": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "osd_up_since": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "num_in_osds": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "osd_in_since": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "num_remapped_pgs": 0
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     },
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "pgmap": {
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "pgs_by_state": [],
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "num_pgs": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "num_pools": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "num_objects": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "data_bytes": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "bytes_used": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "bytes_avail": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "bytes_total": 0
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     },
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "fsmap": {
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "epoch": 1,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "by_rank": [],
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "up:standby": 0
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     },
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "mgrmap": {
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "available": false,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "num_standbys": 0,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "modules": [
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:             "iostat",
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:             "nfs",
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:             "restful"
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         ],
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "services": {}
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     },
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "servicemap": {
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "epoch": 1,
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:         "services": {}
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     },
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]:     "progress_events": {}
Nov 29 05:08:07 compute-0 optimistic_hypatia[75751]: }
Nov 29 05:08:07 compute-0 systemd[1]: libpod-27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7.scope: Deactivated successfully.
Nov 29 05:08:07 compute-0 podman[75735]: 2025-11-29 05:08:07.242597763 +0000 UTC m=+0.599019630 container died 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:08:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:07.262+0000 7f55e947f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 05:08:07 compute-0 ceph-mgr[75473]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 05:08:07 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'prometheus'
Nov 29 05:08:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-823a8a482773534b946d3d92e230f44e2a3589f8f50272bc894c04fe49f9d600-merged.mount: Deactivated successfully.
Nov 29 05:08:07 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2971488717' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:07 compute-0 podman[75735]: 2025-11-29 05:08:07.289493798 +0000 UTC m=+0.645915665 container remove 27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7 (image=quay.io/ceph/ceph:v18, name=optimistic_hypatia, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:07 compute-0 systemd[1]: libpod-conmon-27a13edb760710af5160e095999f232412b09bbfcf1f97380f8aaf93f6c8d0b7.scope: Deactivated successfully.
Nov 29 05:08:08 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:08.236+0000 7f55e947f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 05:08:08 compute-0 ceph-mgr[75473]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 05:08:08 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rbd_support'
Nov 29 05:08:08 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:08.516+0000 7f55e947f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 05:08:08 compute-0 ceph-mgr[75473]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 05:08:08 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'restful'
Nov 29 05:08:09 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rgw'
Nov 29 05:08:09 compute-0 podman[75789]: 2025-11-29 05:08:09.332591755 +0000 UTC m=+0.019975980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:09 compute-0 podman[75789]: 2025-11-29 05:08:09.531434355 +0000 UTC m=+0.218818540 container create fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:08:09 compute-0 systemd[1]: Started libpod-conmon-fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b.scope.
Nov 29 05:08:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32081d447ec0cfbf1f4f87cb46ae3fd30c62876ce2fd0e3fedc5987401b93ce2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32081d447ec0cfbf1f4f87cb46ae3fd30c62876ce2fd0e3fedc5987401b93ce2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32081d447ec0cfbf1f4f87cb46ae3fd30c62876ce2fd0e3fedc5987401b93ce2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:09 compute-0 podman[75789]: 2025-11-29 05:08:09.892573887 +0000 UTC m=+0.579958112 container init fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:08:09 compute-0 podman[75789]: 2025-11-29 05:08:09.900434966 +0000 UTC m=+0.587819171 container start fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:08:09 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:09.901+0000 7f55e947f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 05:08:09 compute-0 ceph-mgr[75473]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 05:08:09 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rook'
Nov 29 05:08:09 compute-0 podman[75789]: 2025-11-29 05:08:09.904513584 +0000 UTC m=+0.591897799 container attach fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 05:08:10 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400864301' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:10 compute-0 wizardly_germain[75805]: 
Nov 29 05:08:10 compute-0 wizardly_germain[75805]: {
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "health": {
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "status": "HEALTH_OK",
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "checks": {},
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "mutes": []
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     },
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "election_epoch": 5,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "quorum": [
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         0
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     ],
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "quorum_names": [
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "compute-0"
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     ],
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "quorum_age": 16,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "monmap": {
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "epoch": 1,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "min_mon_release_name": "reef",
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "num_mons": 1
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     },
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "osdmap": {
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "epoch": 1,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "num_osds": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "num_up_osds": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "osd_up_since": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "num_in_osds": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "osd_in_since": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "num_remapped_pgs": 0
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     },
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "pgmap": {
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "pgs_by_state": [],
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "num_pgs": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "num_pools": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "num_objects": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "data_bytes": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "bytes_used": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "bytes_avail": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "bytes_total": 0
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     },
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "fsmap": {
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "epoch": 1,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "by_rank": [],
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "up:standby": 0
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     },
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "mgrmap": {
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "available": false,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "num_standbys": 0,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "modules": [
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:             "iostat",
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:             "nfs",
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:             "restful"
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         ],
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "services": {}
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     },
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "servicemap": {
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "epoch": 1,
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:         "services": {}
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     },
Nov 29 05:08:10 compute-0 wizardly_germain[75805]:     "progress_events": {}
Nov 29 05:08:10 compute-0 wizardly_germain[75805]: }
Nov 29 05:08:10 compute-0 systemd[1]: libpod-fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b.scope: Deactivated successfully.
Nov 29 05:08:10 compute-0 podman[75789]: 2025-11-29 05:08:10.295573834 +0000 UTC m=+0.982958069 container died fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:10 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1400864301' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-32081d447ec0cfbf1f4f87cb46ae3fd30c62876ce2fd0e3fedc5987401b93ce2-merged.mount: Deactivated successfully.
Nov 29 05:08:10 compute-0 podman[75789]: 2025-11-29 05:08:10.365588344 +0000 UTC m=+1.052972579 container remove fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b (image=quay.io/ceph/ceph:v18, name=wizardly_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:08:10 compute-0 systemd[1]: libpod-conmon-fd84d67be19c2dc8f5500f2ebe48028452b5ddbd20e0e22c3bc71a41ce27086b.scope: Deactivated successfully.
Nov 29 05:08:11 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:11.956+0000 7f55e947f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 05:08:11 compute-0 ceph-mgr[75473]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 05:08:11 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'selftest'
Nov 29 05:08:12 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:12.206+0000 7f55e947f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 05:08:12 compute-0 ceph-mgr[75473]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 05:08:12 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'snap_schedule'
Nov 29 05:08:12 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:12.453+0000 7f55e947f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 05:08:12 compute-0 ceph-mgr[75473]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 05:08:12 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'stats'
Nov 29 05:08:12 compute-0 podman[75844]: 2025-11-29 05:08:12.435216958 +0000 UTC m=+0.034831957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:12 compute-0 podman[75844]: 2025-11-29 05:08:12.650483212 +0000 UTC m=+0.250098131 container create a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:08:12 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'status'
Nov 29 05:08:12 compute-0 systemd[1]: Started libpod-conmon-a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554.scope.
Nov 29 05:08:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e7d0e06ac9419ecb3b43c88448da7b8776b6f835d4c8df43b4dde07a4442c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e7d0e06ac9419ecb3b43c88448da7b8776b6f835d4c8df43b4dde07a4442c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29e7d0e06ac9419ecb3b43c88448da7b8776b6f835d4c8df43b4dde07a4442c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:12 compute-0 podman[75844]: 2025-11-29 05:08:12.74005226 +0000 UTC m=+0.339667189 container init a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:12 compute-0 podman[75844]: 2025-11-29 05:08:12.745666605 +0000 UTC m=+0.345281534 container start a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:12 compute-0 podman[75844]: 2025-11-29 05:08:12.750013189 +0000 UTC m=+0.349628138 container attach a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:08:12 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:12.956+0000 7f55e947f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 05:08:12 compute-0 ceph-mgr[75473]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 05:08:12 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'telegraf'
Nov 29 05:08:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 05:08:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929787841' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]: 
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]: {
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "health": {
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "status": "HEALTH_OK",
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "checks": {},
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "mutes": []
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     },
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "election_epoch": 5,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "quorum": [
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         0
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     ],
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "quorum_names": [
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "compute-0"
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     ],
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "quorum_age": 19,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "monmap": {
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "epoch": 1,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "min_mon_release_name": "reef",
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "num_mons": 1
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     },
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "osdmap": {
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "epoch": 1,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "num_osds": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "num_up_osds": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "osd_up_since": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "num_in_osds": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "osd_in_since": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "num_remapped_pgs": 0
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     },
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "pgmap": {
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "pgs_by_state": [],
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "num_pgs": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "num_pools": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "num_objects": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "data_bytes": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "bytes_used": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "bytes_avail": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "bytes_total": 0
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     },
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "fsmap": {
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "epoch": 1,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "by_rank": [],
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "up:standby": 0
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     },
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "mgrmap": {
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "available": false,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "num_standbys": 0,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "modules": [
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:             "iostat",
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:             "nfs",
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:             "restful"
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         ],
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "services": {}
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     },
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "servicemap": {
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "epoch": 1,
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:         "services": {}
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     },
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]:     "progress_events": {}
Nov 29 05:08:13 compute-0 vigorous_almeida[75861]: }
Nov 29 05:08:13 compute-0 systemd[1]: libpod-a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554.scope: Deactivated successfully.
Nov 29 05:08:13 compute-0 podman[75844]: 2025-11-29 05:08:13.156082249 +0000 UTC m=+0.755697248 container died a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 05:08:13 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1929787841' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:13 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:13.193+0000 7f55e947f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 05:08:13 compute-0 ceph-mgr[75473]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 05:08:13 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'telemetry'
Nov 29 05:08:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-f29e7d0e06ac9419ecb3b43c88448da7b8776b6f835d4c8df43b4dde07a4442c-merged.mount: Deactivated successfully.
Nov 29 05:08:13 compute-0 podman[75844]: 2025-11-29 05:08:13.228510917 +0000 UTC m=+0.828125876 container remove a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554 (image=quay.io/ceph/ceph:v18, name=vigorous_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:08:13 compute-0 systemd[1]: libpod-conmon-a152716623d2fc8d07b5f394e7697c44badbf5a898da69bd33edcffe4c2a7554.scope: Deactivated successfully.
Nov 29 05:08:13 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:13.780+0000 7f55e947f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 05:08:13 compute-0 ceph-mgr[75473]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 05:08:13 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 05:08:14 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:14.434+0000 7f55e947f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 05:08:14 compute-0 ceph-mgr[75473]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 05:08:14 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'volumes'
Nov 29 05:08:15 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:15.143+0000 7f55e947f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'zabbix'
Nov 29 05:08:15 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:15.378+0000 7f55e947f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: ms_deliver_dispatch: unhandled message 0x562d8b79f1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.csskcz
Nov 29 05:08:15 compute-0 podman[75900]: 2025-11-29 05:08:15.303040099 +0000 UTC m=+0.038412083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:15 compute-0 podman[75900]: 2025-11-29 05:08:15.948388088 +0000 UTC m=+0.683760062 container create 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr handle_mgr_map Activating!
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr handle_mgr_map I am now activating
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.csskcz(active, starting, since 0.573948s)
Nov 29 05:08:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 05:08:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 05:08:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 05:08:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 05:08:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 05:08:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.csskcz", "id": "compute-0.csskcz"} v 0) v1
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr metadata", "who": "compute-0.csskcz", "id": "compute-0.csskcz"}]: dispatch
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: balancer
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Manager daemon compute-0.csskcz is now available
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: crash
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [balancer INFO root] Starting
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: devicehealth
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Starting
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: iostat
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:08:15
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [balancer INFO root] No pools available
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: nfs
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: orchestrator
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: pg_autoscaler
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: progress
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 systemd[1]: Started libpod-conmon-9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06.scope.
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [progress INFO root] Loading...
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [progress INFO root] No stored events to load
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [progress INFO root] Loaded [] historic events
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [rbd_support INFO root] recovery thread starting
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [rbd_support INFO root] starting setup
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: rbd_support
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: restful
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: status
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"} v 0) v1
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"}]: dispatch
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: telemetry
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [restful WARNING root] server not running: no certificate configured
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [rbd_support INFO root] PerfHandler: starting
Nov 29 05:08:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TaskHandler: starting
Nov 29 05:08:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"} v 0) v1
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"}]: dispatch
Nov 29 05:08:15 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:08:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 05:08:15 compute-0 ceph-mgr[75473]: [rbd_support INFO root] setup complete
Nov 29 05:08:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 29 05:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbd8f48682686892f9980c0866b729ab114eaf4676af915bd9f5f169871839b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbd8f48682686892f9980c0866b729ab114eaf4676af915bd9f5f169871839b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbd8f48682686892f9980c0866b729ab114eaf4676af915bd9f5f169871839b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:16 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: volumes
Nov 29 05:08:16 compute-0 ceph-mon[75176]: Activating manager daemon compute-0.csskcz
Nov 29 05:08:16 compute-0 ceph-mon[75176]: mgrmap e2: compute-0.csskcz(active, starting, since 0.573948s)
Nov 29 05:08:16 compute-0 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 05:08:16 compute-0 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 05:08:16 compute-0 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 05:08:16 compute-0 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 05:08:16 compute-0 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr metadata", "who": "compute-0.csskcz", "id": "compute-0.csskcz"}]: dispatch
Nov 29 05:08:16 compute-0 ceph-mon[75176]: Manager daemon compute-0.csskcz is now available
Nov 29 05:08:16 compute-0 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"}]: dispatch
Nov 29 05:08:16 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:16 compute-0 podman[75900]: 2025-11-29 05:08:16.015346304 +0000 UTC m=+0.750718308 container init 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 29 05:08:16 compute-0 podman[75900]: 2025-11-29 05:08:16.022106896 +0000 UTC m=+0.757478860 container start 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:16 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:16 compute-0 podman[75900]: 2025-11-29 05:08:16.025384075 +0000 UTC m=+0.760756039 container attach 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 05:08:16 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357198134' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]: 
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]: {
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "health": {
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "status": "HEALTH_OK",
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "checks": {},
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "mutes": []
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     },
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "election_epoch": 5,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "quorum": [
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         0
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     ],
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "quorum_names": [
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "compute-0"
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     ],
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "quorum_age": 22,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "monmap": {
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "epoch": 1,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "min_mon_release_name": "reef",
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "num_mons": 1
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     },
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "osdmap": {
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "epoch": 1,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "num_osds": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "num_up_osds": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "osd_up_since": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "num_in_osds": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "osd_in_since": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "num_remapped_pgs": 0
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     },
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "pgmap": {
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "pgs_by_state": [],
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "num_pgs": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "num_pools": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "num_objects": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "data_bytes": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "bytes_used": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "bytes_avail": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "bytes_total": 0
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     },
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "fsmap": {
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "epoch": 1,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "by_rank": [],
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "up:standby": 0
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     },
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "mgrmap": {
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "available": false,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "num_standbys": 0,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "modules": [
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:             "iostat",
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:             "nfs",
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:             "restful"
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         ],
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "services": {}
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     },
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "servicemap": {
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "epoch": 1,
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:         "services": {}
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     },
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]:     "progress_events": {}
Nov 29 05:08:16 compute-0 blissful_archimedes[75947]: }
Nov 29 05:08:16 compute-0 systemd[1]: libpod-9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06.scope: Deactivated successfully.
Nov 29 05:08:16 compute-0 podman[75900]: 2025-11-29 05:08:16.423920005 +0000 UTC m=+1.159292049 container died 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 05:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dbd8f48682686892f9980c0866b729ab114eaf4676af915bd9f5f169871839b-merged.mount: Deactivated successfully.
Nov 29 05:08:16 compute-0 podman[75900]: 2025-11-29 05:08:16.470922842 +0000 UTC m=+1.206294816 container remove 9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06 (image=quay.io/ceph/ceph:v18, name=blissful_archimedes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 05:08:16 compute-0 systemd[1]: libpod-conmon-9b8bee2ae105e6140a31350294af385e1c3a99810b4886d365ade8c64f68be06.scope: Deactivated successfully.
Nov 29 05:08:16 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.csskcz(active, since 1.60234s)
Nov 29 05:08:17 compute-0 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"}]: dispatch
Nov 29 05:08:17 compute-0 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:17 compute-0 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:17 compute-0 ceph-mon[75176]: from='mgr.14102 192.168.122.100:0/972054641' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:17 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3357198134' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:17 compute-0 ceph-mon[75176]: mgrmap e3: compute-0.csskcz(active, since 1.60234s)
Nov 29 05:08:17 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 05:08:18 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.csskcz(active, since 2s)
Nov 29 05:08:18 compute-0 podman[76035]: 2025-11-29 05:08:18.533424033 +0000 UTC m=+0.039260914 container create 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:08:18 compute-0 systemd[1]: Started libpod-conmon-8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197.scope.
Nov 29 05:08:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da4afed94a6b2361c91b56c110c594ab1e6df028f85cee41710380fa12c8406/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da4afed94a6b2361c91b56c110c594ab1e6df028f85cee41710380fa12c8406/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da4afed94a6b2361c91b56c110c594ab1e6df028f85cee41710380fa12c8406/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:18 compute-0 podman[76035]: 2025-11-29 05:08:18.599735772 +0000 UTC m=+0.105572653 container init 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:18 compute-0 podman[76035]: 2025-11-29 05:08:18.604530547 +0000 UTC m=+0.110367418 container start 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 05:08:18 compute-0 podman[76035]: 2025-11-29 05:08:18.607608215 +0000 UTC m=+0.113445096 container attach 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:08:18 compute-0 podman[76035]: 2025-11-29 05:08:18.518066666 +0000 UTC m=+0.023903577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:19 compute-0 ceph-mon[75176]: mgrmap e4: compute-0.csskcz(active, since 2s)
Nov 29 05:08:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 05:08:19 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2118182639' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:19 compute-0 zealous_elion[76052]: 
Nov 29 05:08:19 compute-0 zealous_elion[76052]: {
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "health": {
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "status": "HEALTH_OK",
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "checks": {},
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "mutes": []
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     },
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "election_epoch": 5,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "quorum": [
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         0
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     ],
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "quorum_names": [
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "compute-0"
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     ],
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "quorum_age": 25,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "monmap": {
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "epoch": 1,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "min_mon_release_name": "reef",
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "num_mons": 1
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     },
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "osdmap": {
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "epoch": 1,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "num_osds": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "num_up_osds": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "osd_up_since": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "num_in_osds": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "osd_in_since": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "num_remapped_pgs": 0
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     },
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "pgmap": {
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "pgs_by_state": [],
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "num_pgs": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "num_pools": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "num_objects": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "data_bytes": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "bytes_used": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "bytes_avail": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "bytes_total": 0
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     },
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "fsmap": {
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "epoch": 1,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "by_rank": [],
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "up:standby": 0
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     },
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "mgrmap": {
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "available": true,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "num_standbys": 0,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "modules": [
Nov 29 05:08:19 compute-0 zealous_elion[76052]:             "iostat",
Nov 29 05:08:19 compute-0 zealous_elion[76052]:             "nfs",
Nov 29 05:08:19 compute-0 zealous_elion[76052]:             "restful"
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         ],
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "services": {}
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     },
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "servicemap": {
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "epoch": 1,
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "modified": "2025-11-29T05:07:51.349368+0000",
Nov 29 05:08:19 compute-0 zealous_elion[76052]:         "services": {}
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     },
Nov 29 05:08:19 compute-0 zealous_elion[76052]:     "progress_events": {}
Nov 29 05:08:19 compute-0 zealous_elion[76052]: }
Nov 29 05:08:19 compute-0 systemd[1]: libpod-8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197.scope: Deactivated successfully.
Nov 29 05:08:19 compute-0 podman[76035]: 2025-11-29 05:08:19.211566898 +0000 UTC m=+0.717403779 container died 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 29 05:08:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6da4afed94a6b2361c91b56c110c594ab1e6df028f85cee41710380fa12c8406-merged.mount: Deactivated successfully.
Nov 29 05:08:19 compute-0 podman[76035]: 2025-11-29 05:08:19.257157051 +0000 UTC m=+0.762993932 container remove 8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197 (image=quay.io/ceph/ceph:v18, name=zealous_elion, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:08:19 compute-0 systemd[1]: libpod-conmon-8104598181a6d82db6f5e73f6a89ef856cb084215bdc880a255b020b5d005197.scope: Deactivated successfully.
Nov 29 05:08:19 compute-0 podman[76089]: 2025-11-29 05:08:19.322076869 +0000 UTC m=+0.047921035 container create 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:08:19 compute-0 systemd[1]: Started libpod-conmon-7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478.scope.
Nov 29 05:08:19 compute-0 podman[76089]: 2025-11-29 05:08:19.295510265 +0000 UTC m=+0.021354491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c354a69dcef5fcec12e8dc389644bc4d2fe81ff77311aa302eb7bac97dc670/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c354a69dcef5fcec12e8dc389644bc4d2fe81ff77311aa302eb7bac97dc670/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c354a69dcef5fcec12e8dc389644bc4d2fe81ff77311aa302eb7bac97dc670/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c354a69dcef5fcec12e8dc389644bc4d2fe81ff77311aa302eb7bac97dc670/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:19 compute-0 podman[76089]: 2025-11-29 05:08:19.423309505 +0000 UTC m=+0.149153731 container init 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:19 compute-0 podman[76089]: 2025-11-29 05:08:19.434127853 +0000 UTC m=+0.159971989 container start 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 05:08:19 compute-0 podman[76089]: 2025-11-29 05:08:19.437528288 +0000 UTC m=+0.163372414 container attach 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:19 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 05:08:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 05:08:19 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2974597965' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
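`config assimilate-conf` moves options from an INI-style ceph.conf into the mon's central config database, which is how bootstrap seeds the initial settings. A hedged sketch of the same call, with a made-up option and file name for illustration:

```python
import subprocess

# An INI fragment to assimilate; the option and path are illustrative.
fragment = "/tmp/fragment.conf"
with open(fragment, "w") as f:
    f.write("[global]\nosd_pool_default_size = 1\n")

# `ceph config assimilate-conf -i <file>` stores the options in the mon
# config database and prints back anything it could not assimilate.
subprocess.run(["ceph", "config", "assimilate-conf", "-i", fragment], check=True)
```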
Nov 29 05:08:19 compute-0 systemd[1]: libpod-7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478.scope: Deactivated successfully.
Nov 29 05:08:19 compute-0 podman[76089]: 2025-11-29 05:08:19.991048303 +0000 UTC m=+0.716892489 container died 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:08:20 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2118182639' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:08:20 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2974597965' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 05:08:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2c354a69dcef5fcec12e8dc389644bc4d2fe81ff77311aa302eb7bac97dc670-merged.mount: Deactivated successfully.
Nov 29 05:08:20 compute-0 podman[76089]: 2025-11-29 05:08:20.908199954 +0000 UTC m=+1.634044110 container remove 7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478 (image=quay.io/ceph/ceph:v18, name=cranky_heisenberg, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:20 compute-0 systemd[1]: libpod-conmon-7008ae13047fbb01febc2674adbf417501f49aa519f6323dc044303c2ef7d478.scope: Deactivated successfully.
Nov 29 05:08:20 compute-0 podman[76148]: 2025-11-29 05:08:20.968640273 +0000 UTC m=+0.041400392 container create c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 05:08:20 compute-0 systemd[1]: Started libpod-conmon-c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff.scope.
Nov 29 05:08:21 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7badcbe791a3b62a217e878d983dbe8618da507fa246e953c5a8197ed761ade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7badcbe791a3b62a217e878d983dbe8618da507fa246e953c5a8197ed761ade/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7badcbe791a3b62a217e878d983dbe8618da507fa246e953c5a8197ed761ade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:21 compute-0 podman[76148]: 2025-11-29 05:08:21.039462462 +0000 UTC m=+0.112222611 container init c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:08:21 compute-0 podman[76148]: 2025-11-29 05:08:20.951303003 +0000 UTC m=+0.024063122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:21 compute-0 podman[76148]: 2025-11-29 05:08:21.048464069 +0000 UTC m=+0.121224188 container start c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:08:21 compute-0 podman[76148]: 2025-11-29 05:08:21.05213028 +0000 UTC m=+0.124890419 container attach c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:08:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 29 05:08:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1449341761' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 05:08:21 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1449341761' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 05:08:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1449341761' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
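The enable command finishing is what changes the mgr's module set, so the respawn in the lines just below is the expected follow-on. A sketch of the same enable step plus an obvious verification, assuming the `ceph` CLI; only the enable appears in the audit log above:

```python
import json
import subprocess

def enable_mgr_module(name: str) -> None:
    """Enable a mgr module and confirm it is in the enabled set.

    Enabling changes the module set, which makes the active mgr respawn,
    so the verification call may need a retry in practice.
    """
    subprocess.run(["ceph", "mgr", "module", "enable", name], check=True)
    out = subprocess.run(
        ["ceph", "mgr", "module", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    enabled = json.loads(out).get("enabled_modules", [])
    assert name in enabled, f"{name} not yet listed as enabled"

enable_mgr_module("cephadm")
```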
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  1: '-n'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  2: 'mgr.compute-0.csskcz'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  3: '-f'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  4: '--setuser'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  5: 'ceph'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  6: '--setgroup'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  7: 'ceph'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  8: '--default-log-to-file=false'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  9: '--default-log-to-journald=true'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 29 05:08:21 compute-0 ceph-mgr[75473]: mgr respawn  exe_path /proc/self/exe
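The respawn dump above shows the technique: ceph-mgr keeps its original argv and re-executes itself via /proc/self/exe, so the running image is re-executed even if the binary on disk was replaced in the meantime. A minimal Python sketch of that trick (not ceph's actual code), Linux-only:

```python
import os
import sys

def respawn() -> None:
    """Re-exec this process with its original argv via /proc/self/exe.

    The same trick the mgr logs above: /proc/self/exe re-executes the
    running image even if the on-disk binary was replaced. Linux-only,
    and execv never returns on success.
    """
    os.execv("/proc/self/exe", [sys.executable] + sys.argv)

# Guard so the demo respawns exactly once instead of looping forever.
if os.environ.get("RESPAWNED") != "1":
    os.environ["RESPAWNED"] = "1"   # survives execv via the process environ
    respawn()
print("running after respawn, argv preserved:", sys.argv)
```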
Nov 29 05:08:21 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.csskcz(active, since 6s)
Nov 29 05:08:21 compute-0 systemd[1]: libpod-c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff.scope: Deactivated successfully.
Nov 29 05:08:21 compute-0 podman[76148]: 2025-11-29 05:08:21.90283931 +0000 UTC m=+0.975599439 container died c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7badcbe791a3b62a217e878d983dbe8618da507fa246e953c5a8197ed761ade-merged.mount: Deactivated successfully.
Nov 29 05:08:21 compute-0 podman[76148]: 2025-11-29 05:08:21.953940055 +0000 UTC m=+1.026700144 container remove c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff (image=quay.io/ceph/ceph:v18, name=affectionate_kirch, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:21 compute-0 systemd[1]: libpod-conmon-c6764f692102bac7ecc7a3948313759be98c1fdf1ac1219ba8e255b386ae71ff.scope: Deactivated successfully.
Nov 29 05:08:22 compute-0 podman[76202]: 2025-11-29 05:08:22.021871628 +0000 UTC m=+0.048227151 container create 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:08:22 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: ignoring --setuser ceph since I am not root
Nov 29 05:08:22 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: ignoring --setgroup ceph since I am not root
Nov 29 05:08:22 compute-0 ceph-mgr[75473]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 05:08:22 compute-0 ceph-mgr[75473]: pidfile_write: ignore empty --pid-file
Nov 29 05:08:22 compute-0 systemd[1]: Started libpod-conmon-68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56.scope.
Nov 29 05:08:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b4f8dfd48db842f8591d757bde0bc57c51b059a16480643654d3d770bcc194/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b4f8dfd48db842f8591d757bde0bc57c51b059a16480643654d3d770bcc194/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42b4f8dfd48db842f8591d757bde0bc57c51b059a16480643654d3d770bcc194/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:22 compute-0 podman[76202]: 2025-11-29 05:08:22.085505598 +0000 UTC m=+0.111861191 container init 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:08:22 compute-0 podman[76202]: 2025-11-29 05:08:22.091852467 +0000 UTC m=+0.118207980 container start 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:08:22 compute-0 podman[76202]: 2025-11-29 05:08:21.996586362 +0000 UTC m=+0.022941895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:22 compute-0 podman[76202]: 2025-11-29 05:08:22.095837415 +0000 UTC m=+0.122192968 container attach 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 05:08:22 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'alerts'
Nov 29 05:08:22 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:22.458+0000 7fa55b499140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 05:08:22 compute-0 ceph-mgr[75473]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 05:08:22 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'balancer'
Nov 29 05:08:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 05:08:22 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1544642393' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 05:08:22 compute-0 trusting_lamport[76242]: {
Nov 29 05:08:22 compute-0 trusting_lamport[76242]:     "epoch": 5,
Nov 29 05:08:22 compute-0 trusting_lamport[76242]:     "available": true,
Nov 29 05:08:22 compute-0 trusting_lamport[76242]:     "active_name": "compute-0.csskcz",
Nov 29 05:08:22 compute-0 trusting_lamport[76242]:     "num_standby": 0
Nov 29 05:08:22 compute-0 trusting_lamport[76242]: }
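The four fields above are the reply to the `mgr stat` command the mon just audited; bootstrap uses it to learn which mgr daemon is active. A sketch of the same query, assuming the `ceph` CLI:

```python
import json
import subprocess

# `ceph mgr stat` returns exactly the document printed above:
# epoch, available, active_name, num_standby.
stat = json.loads(
    subprocess.run(
        ["ceph", "mgr", "stat"],
        check=True, capture_output=True, text=True,
    ).stdout
)
if stat["available"]:
    print(f"active mgr {stat['active_name']}, {stat['num_standby']} standby(s)")
```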
Nov 29 05:08:22 compute-0 systemd[1]: libpod-68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56.scope: Deactivated successfully.
Nov 29 05:08:22 compute-0 podman[76268]: 2025-11-29 05:08:22.688334267 +0000 UTC m=+0.023358514 container died 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-42b4f8dfd48db842f8591d757bde0bc57c51b059a16480643654d3d770bcc194-merged.mount: Deactivated successfully.
Nov 29 05:08:22 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:22.707+0000 7fa55b499140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 05:08:22 compute-0 ceph-mgr[75473]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 05:08:22 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'cephadm'
Nov 29 05:08:22 compute-0 podman[76268]: 2025-11-29 05:08:22.726522997 +0000 UTC m=+0.061547214 container remove 68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56 (image=quay.io/ceph/ceph:v18, name=trusting_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:08:22 compute-0 systemd[1]: libpod-conmon-68a43b3f3cf951bb777f537d1a3ccb16b12c44cc636d9d9fd371f7f057601a56.scope: Deactivated successfully.
Nov 29 05:08:22 compute-0 podman[76283]: 2025-11-29 05:08:22.791896775 +0000 UTC m=+0.040056692 container create 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:22 compute-0 systemd[1]: Started libpod-conmon-4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0.scope.
Nov 29 05:08:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeba4518fe5213be3d93c4d734e288e968bdc7eaf16c4c3f06848242cc91f1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeba4518fe5213be3d93c4d734e288e968bdc7eaf16c4c3f06848242cc91f1b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbeba4518fe5213be3d93c4d734e288e968bdc7eaf16c4c3f06848242cc91f1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:22 compute-0 podman[76283]: 2025-11-29 05:08:22.845621987 +0000 UTC m=+0.093781944 container init 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:08:22 compute-0 podman[76283]: 2025-11-29 05:08:22.850605126 +0000 UTC m=+0.098765043 container start 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 05:08:22 compute-0 podman[76283]: 2025-11-29 05:08:22.85350635 +0000 UTC m=+0.101666267 container attach 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:22 compute-0 podman[76283]: 2025-11-29 05:08:22.776816473 +0000 UTC m=+0.024976410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:22 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1449341761' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 05:08:22 compute-0 ceph-mon[75176]: mgrmap e5: compute-0.csskcz(active, since 6s)
Nov 29 05:08:22 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1544642393' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 05:08:24 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'crash'
Nov 29 05:08:24 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:24.876+0000 7fa55b499140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 05:08:24 compute-0 ceph-mgr[75473]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 05:08:24 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'dashboard'
Nov 29 05:08:26 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'devicehealth'
Nov 29 05:08:26 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:26.490+0000 7fa55b499140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 05:08:26 compute-0 ceph-mgr[75473]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 05:08:26 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 05:08:27 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 05:08:27 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 05:08:27 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]:   from numpy import show_config as show_numpy_config
Nov 29 05:08:27 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:27.020+0000 7fa55b499140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 05:08:27 compute-0 ceph-mgr[75473]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 05:08:27 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'influx'
Nov 29 05:08:27 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:27.250+0000 7fa55b499140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 05:08:27 compute-0 ceph-mgr[75473]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 05:08:27 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'insights'
Nov 29 05:08:27 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'iostat'
Nov 29 05:08:27 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:27.718+0000 7fa55b499140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 05:08:27 compute-0 ceph-mgr[75473]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 05:08:27 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'k8sevents'
Nov 29 05:08:29 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'localpool'
Nov 29 05:08:29 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 05:08:30 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'mirroring'
Nov 29 05:08:30 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'nfs'
Nov 29 05:08:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:31.241+0000 7fa55b499140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 05:08:31 compute-0 ceph-mgr[75473]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 05:08:31 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'orchestrator'
Nov 29 05:08:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:31.896+0000 7fa55b499140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 05:08:31 compute-0 ceph-mgr[75473]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 05:08:31 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 05:08:32 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:32.167+0000 7fa55b499140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 05:08:32 compute-0 ceph-mgr[75473]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 05:08:32 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'osd_support'
Nov 29 05:08:32 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:32.401+0000 7fa55b499140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 05:08:32 compute-0 ceph-mgr[75473]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 05:08:32 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 05:08:32 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:32.681+0000 7fa55b499140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 05:08:32 compute-0 ceph-mgr[75473]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 05:08:32 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'progress'
Nov 29 05:08:32 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:32.922+0000 7fa55b499140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 05:08:32 compute-0 ceph-mgr[75473]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 05:08:32 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'prometheus'
Nov 29 05:08:33 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:33.910+0000 7fa55b499140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 05:08:33 compute-0 ceph-mgr[75473]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 05:08:33 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rbd_support'
Nov 29 05:08:34 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:34.206+0000 7fa55b499140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 05:08:34 compute-0 ceph-mgr[75473]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 05:08:34 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'restful'
Nov 29 05:08:34 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rgw'
Nov 29 05:08:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:35.575+0000 7fa55b499140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 05:08:35 compute-0 ceph-mgr[75473]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 05:08:35 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'rook'
Nov 29 05:08:37 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:37.752+0000 7fa55b499140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 05:08:37 compute-0 ceph-mgr[75473]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 05:08:37 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'selftest'
Nov 29 05:08:38 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:38.006+0000 7fa55b499140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 05:08:38 compute-0 ceph-mgr[75473]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 05:08:38 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'snap_schedule'
Nov 29 05:08:38 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:38.259+0000 7fa55b499140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 05:08:38 compute-0 ceph-mgr[75473]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 05:08:38 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'stats'
Nov 29 05:08:38 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'status'
Nov 29 05:08:38 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:38.792+0000 7fa55b499140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 05:08:38 compute-0 ceph-mgr[75473]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 05:08:38 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'telegraf'
Nov 29 05:08:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:39.040+0000 7fa55b499140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 05:08:39 compute-0 ceph-mgr[75473]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 05:08:39 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'telemetry'
Nov 29 05:08:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:39.660+0000 7fa55b499140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 05:08:39 compute-0 ceph-mgr[75473]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 05:08:39 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 05:08:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:40.302+0000 7fa55b499140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 05:08:40 compute-0 ceph-mgr[75473]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 05:08:40 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'volumes'
Nov 29 05:08:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:40.977+0000 7fa55b499140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 05:08:40 compute-0 ceph-mgr[75473]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 05:08:40 compute-0 ceph-mgr[75473]: mgr[py] Loading python module 'zabbix'
Nov 29 05:08:41 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:08:41.221+0000 7fa55b499140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
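Each "missing NOTIFY_TYPES member" line is the mgr noting that a bundled module does not declare which cluster-map notifications it wants; the modules still load, and the member is what lets the mgr limit delivery to the notification types a module actually handles. A hedged skeleton of a module that declares it; the mgr_module bindings only exist inside the ceph-mgr runtime, and the class name and handler body are illustrative:

```python
# Skeleton of a mgr module that declares NOTIFY_TYPES. Not runnable
# standalone: mgr_module is importable only inside ceph-mgr.
from mgr_module import MgrModule, NotifyType


class Example(MgrModule):
    # Declaring the member avoids "has missing NOTIFY_TYPES member" and
    # tells the mgr which notifications this module cares about.
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

    def notify(self, notify_type: NotifyType, notify_id: str) -> None:
        self.log.info("received %s notification", notify_type)
```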
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Active manager daemon compute-0.csskcz restarted
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.csskcz
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: ms_deliver_dispatch: unhandled message 0x56323f4bd1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr handle_mgr_map Activating!
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr handle_mgr_map I am now activating
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.csskcz(active, starting, since 0.0174883s)
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.csskcz", "id": "compute-0.csskcz"} v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr metadata", "who": "compute-0.csskcz", "id": "compute-0.csskcz"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: balancer
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Manager daemon compute-0.csskcz is now available
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Starting
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:08:41
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [balancer INFO root] No pools available
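On startup the balancer runs one optimization pass with the logged parameters (upmap mode, 5% max misplaced) and correctly finds nothing to do, since the cluster has no pools or OSDs yet. A sketch of checking that state, assuming the `ceph` CLI:

```python
import json
import subprocess

# `ceph balancer status` reports the active flag, mode, and last result;
# at this point in the log it would show mode "upmap" with no plans.
print(json.loads(
    subprocess.run(
        ["ceph", "balancer", "status"],
        check=True, capture_output=True, text=True,
    ).stdout
))
```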
Nov 29 05:08:41 compute-0 ceph-mon[75176]: Active manager daemon compute-0.csskcz restarted
Nov 29 05:08:41 compute-0 ceph-mon[75176]: Activating manager daemon compute-0.csskcz
Nov 29 05:08:41 compute-0 ceph-mon[75176]: osdmap e2: 0 total, 0 up, 0 in
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mgrmap e6: compute-0.csskcz(active, starting, since 0.0174883s)
Nov 29 05:08:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr metadata", "who": "compute-0.csskcz", "id": "compute-0.csskcz"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mon[75176]: Manager daemon compute-0.csskcz is now available
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: cephadm
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: crash
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: devicehealth
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: iostat
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: nfs
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Starting
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: orchestrator
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: pg_autoscaler
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: progress
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [progress INFO root] Loading...
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [progress INFO root] No stored events to load
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [progress INFO root] Loaded [] historic events
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] recovery thread starting
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] starting setup
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: rbd_support
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: restful
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"} v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [restful WARNING root] server not running: no certificate configured
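The restful module constructs fine but will not serve until a TLS certificate is configured, so the warning above is the normal state of a fresh cluster. A sketch of the stock remedy, assuming a self-signed certificate is acceptable:

```python
import subprocess

# Generates and stores a self-signed certificate for the restful module,
# after which it starts serving on its configured port (8003 above).
subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)
```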
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: status
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: telemetry
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] PerfHandler: starting
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TaskHandler: starting
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"} v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"}]: dispatch
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] setup complete
Nov 29 05:08:41 compute-0 ceph-mgr[75473]: mgr load Constructed class from module: volumes
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 29 05:08:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:42 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 05:08:42 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.csskcz(active, since 1.02909s)
Nov 29 05:08:42 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 05:08:42 compute-0 elegant_tesla[76299]: {
Nov 29 05:08:42 compute-0 elegant_tesla[76299]:     "mgrmap_epoch": 7,
Nov 29 05:08:42 compute-0 elegant_tesla[76299]:     "initialized": true
Nov 29 05:08:42 compute-0 elegant_tesla[76299]: }
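The `mgr_status` reply above (mgrmap_epoch, initialized) is how the bootstrap confirms the respawned mgr came back: it keeps issuing the query through throwaway containers until initialized flips to true. A sketch of that wait loop over the same document shape; the fetch callable is left abstract since the transport here is a containerized ceph call:

```python
import json
import time

def wait_initialized(fetch, timeout: float = 60.0) -> dict:
    """Poll a mgr_status-style reply until `initialized` is true.

    `fetch` returns the JSON text; in this log the query runs through a
    throwaway ceph container. Field names match the reply above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        doc = json.loads(fetch())
        if doc.get("initialized"):
            return doc
        time.sleep(1)
    raise TimeoutError("mgr never reported initialized")

# With the reply captured in the log:
print(wait_initialized(lambda: '{"mgrmap_epoch": 7, "initialized": true}'))
```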
Nov 29 05:08:42 compute-0 systemd[1]: libpod-4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0.scope: Deactivated successfully.
Nov 29 05:08:42 compute-0 ceph-mon[75176]: Found migration_current of "None". Setting to last migration.
Nov 29 05:08:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:08:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:08:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/mirror_snapshot_schedule"}]: dispatch
Nov 29 05:08:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.csskcz/trash_purge_schedule"}]: dispatch
Nov 29 05:08:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:42 compute-0 ceph-mon[75176]: mgrmap e7: compute-0.csskcz(active, since 1.02909s)
Nov 29 05:08:42 compute-0 podman[76446]: 2025-11-29 05:08:42.362748483 +0000 UTC m=+0.041325310 container died 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbeba4518fe5213be3d93c4d734e288e968bdc7eaf16c4c3f06848242cc91f1b-merged.mount: Deactivated successfully.
Nov 29 05:08:42 compute-0 podman[76446]: 2025-11-29 05:08:42.416487085 +0000 UTC m=+0.095063872 container remove 4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0 (image=quay.io/ceph/ceph:v18, name=elegant_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:08:42 compute-0 systemd[1]: libpod-conmon-4becd377cf32c003e07623ced4b55fcaa9a45ee4728ac6df88c26d6b44d67be0.scope: Deactivated successfully.
Nov 29 05:08:42 compute-0 podman[76461]: 2025-11-29 05:08:42.528238513 +0000 UTC m=+0.071071125 container create 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:42 compute-0 systemd[1]: Started libpod-conmon-8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70.scope.
Nov 29 05:08:42 compute-0 podman[76461]: 2025-11-29 05:08:42.499790647 +0000 UTC m=+0.042623319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0251c25a78c1ebd39941e1711de8479d00eda72e95e7c3d036e3fcf75618e531/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0251c25a78c1ebd39941e1711de8479d00eda72e95e7c3d036e3fcf75618e531/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0251c25a78c1ebd39941e1711de8479d00eda72e95e7c3d036e3fcf75618e531/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:42 compute-0 podman[76461]: 2025-11-29 05:08:42.616660497 +0000 UTC m=+0.159493079 container init 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:08:42 compute-0 podman[76461]: 2025-11-29 05:08:42.630715027 +0000 UTC m=+0.173547619 container start 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:42 compute-0 podman[76461]: 2025-11-29 05:08:42.634238725 +0000 UTC m=+0.177071307 container attach 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:42 compute-0 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Nov/2025:05:08:42] ENGINE Bus STARTING
Nov 29 05:08:42 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Nov/2025:05:08:42] ENGINE Bus STARTING
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Nov/2025:05:08:43] ENGINE Serving on http://192.168.122.100:8765
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Nov/2025:05:08:43] ENGINE Serving on http://192.168.122.100:8765
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 29 05:08:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 05:08:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Nov/2025:05:08:43] ENGINE Serving on https://192.168.122.100:7150
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Nov/2025:05:08:43] ENGINE Serving on https://192.168.122.100:7150
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Nov/2025:05:08:43] ENGINE Bus STARTED
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Nov/2025:05:08:43] ENGINE Bus STARTED
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: [cephadm INFO cherrypy.error] [29/Nov/2025:05:08:43] ENGINE Client ('192.168.122.100', 36438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 05:08:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : [29/Nov/2025:05:08:43] ENGINE Client ('192.168.122.100', 36438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 05:08:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:08:43 compute-0 systemd[1]: libpod-8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70.scope: Deactivated successfully.
Nov 29 05:08:43 compute-0 podman[76461]: 2025-11-29 05:08:43.220845537 +0000 UTC m=+0.763678159 container died 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 05:08:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0251c25a78c1ebd39941e1711de8479d00eda72e95e7c3d036e3fcf75618e531-merged.mount: Deactivated successfully.
Nov 29 05:08:43 compute-0 podman[76461]: 2025-11-29 05:08:43.28150499 +0000 UTC m=+0.824337602 container remove 8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70 (image=quay.io/ceph/ceph:v18, name=admiring_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 05:08:43 compute-0 systemd[1]: libpod-conmon-8817fdeb46fd4c95e0d3a56a6080acf8c0dc6d87b389509de8cba6dd37a13a70.scope: Deactivated successfully.
Nov 29 05:08:43 compute-0 ceph-mon[75176]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 05:08:43 compute-0 ceph-mon[75176]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 05:08:43 compute-0 ceph-mon[75176]: [29/Nov/2025:05:08:42] ENGINE Bus STARTING
Nov 29 05:08:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:08:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:08:43 compute-0 podman[76541]: 2025-11-29 05:08:43.349552567 +0000 UTC m=+0.050593463 container create 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:43 compute-0 systemd[1]: Started libpod-conmon-5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1.scope.
Nov 29 05:08:43 compute-0 podman[76541]: 2025-11-29 05:08:43.324795913 +0000 UTC m=+0.025836869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a2dcd0b623cd957e5f2f420d547de90bec17fabf1163e2db8f563c96b30429/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a2dcd0b623cd957e5f2f420d547de90bec17fabf1163e2db8f563c96b30429/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38a2dcd0b623cd957e5f2f420d547de90bec17fabf1163e2db8f563c96b30429/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:43 compute-0 podman[76541]: 2025-11-29 05:08:43.446779796 +0000 UTC m=+0.147820732 container init 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:08:43 compute-0 podman[76541]: 2025-11-29 05:08:43.458234307 +0000 UTC m=+0.159275213 container start 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:08:43 compute-0 podman[76541]: 2025-11-29 05:08:43.462237895 +0000 UTC m=+0.163278841 container attach 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 29 05:08:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_user
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 29 05:08:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 29 05:08:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_config
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 29 05:08:43 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 29 05:08:43 compute-0 busy_morse[76558]: ssh user set to ceph-admin. sudo will be used
Nov 29 05:08:44 compute-0 systemd[1]: libpod-5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1.scope: Deactivated successfully.
Nov 29 05:08:44 compute-0 podman[76541]: 2025-11-29 05:08:44.009907951 +0000 UTC m=+0.710948857 container died 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-38a2dcd0b623cd957e5f2f420d547de90bec17fabf1163e2db8f563c96b30429-merged.mount: Deactivated successfully.
Nov 29 05:08:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920999 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:08:44 compute-0 podman[76541]: 2025-11-29 05:08:44.049626765 +0000 UTC m=+0.750667651 container remove 5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1 (image=quay.io/ceph/ceph:v18, name=busy_morse, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:08:44 compute-0 systemd[1]: libpod-conmon-5fb3bdef0c7d1c0bfe8fd154746dbeb1ec18079a32c593ead0ec2ae3f0bba6f1.scope: Deactivated successfully.
Nov 29 05:08:44 compute-0 podman[76596]: 2025-11-29 05:08:44.102299383 +0000 UTC m=+0.036501974 container create 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:44 compute-0 systemd[1]: Started libpod-conmon-0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337.scope.
Nov 29 05:08:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:44 compute-0 podman[76596]: 2025-11-29 05:08:44.158880098 +0000 UTC m=+0.093082699 container init 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:08:44 compute-0 podman[76596]: 2025-11-29 05:08:44.168463109 +0000 UTC m=+0.102665710 container start 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:44 compute-0 podman[76596]: 2025-11-29 05:08:44.172459576 +0000 UTC m=+0.106662207 container attach 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:08:44 compute-0 podman[76596]: 2025-11-29 05:08:44.083177813 +0000 UTC m=+0.017380404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:44 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.csskcz(active, since 2s)
Nov 29 05:08:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 29 05:08:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:44 compute-0 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 29 05:08:44 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 29 05:08:44 compute-0 ceph-mgr[75473]: [cephadm INFO root] Set ssh private key
Nov 29 05:08:44 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 29 05:08:44 compute-0 systemd[1]: libpod-0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337.scope: Deactivated successfully.
Nov 29 05:08:44 compute-0 conmon[76612]: conmon 0816a30fe9916fc923db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337.scope/container/memory.events
Nov 29 05:08:44 compute-0 podman[76596]: 2025-11-29 05:08:44.700376188 +0000 UTC m=+0.634578759 container died 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1219f3fb10abf1722d43948a24a7c47b28ab046b4ba0cf0cbde10daca2e93a29-merged.mount: Deactivated successfully.
Nov 29 05:08:44 compute-0 podman[76596]: 2025-11-29 05:08:44.741547513 +0000 UTC m=+0.675750094 container remove 0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337 (image=quay.io/ceph/ceph:v18, name=bold_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:08:44 compute-0 systemd[1]: libpod-conmon-0816a30fe9916fc923dbdc3b6ea103dd134ac303296db3de8653226a2bb44337.scope: Deactivated successfully.
Nov 29 05:08:44 compute-0 podman[76651]: 2025-11-29 05:08:44.798013785 +0000 UTC m=+0.040673956 container create 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 05:08:44 compute-0 systemd[1]: Started libpod-conmon-3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c.scope.
Nov 29 05:08:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:44 compute-0 podman[76651]: 2025-11-29 05:08:44.780027729 +0000 UTC m=+0.022687890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:44 compute-0 podman[76651]: 2025-11-29 05:08:44.886127893 +0000 UTC m=+0.128788064 container init 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:44 compute-0 podman[76651]: 2025-11-29 05:08:44.898301421 +0000 UTC m=+0.140961562 container start 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:44 compute-0 podman[76651]: 2025-11-29 05:08:44.901827478 +0000 UTC m=+0.144487809 container attach 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:08:44 compute-0 ceph-mon[75176]: [29/Nov/2025:05:08:43] ENGINE Serving on http://192.168.122.100:8765
Nov 29 05:08:44 compute-0 ceph-mon[75176]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:44 compute-0 ceph-mon[75176]: [29/Nov/2025:05:08:43] ENGINE Serving on https://192.168.122.100:7150
Nov 29 05:08:44 compute-0 ceph-mon[75176]: [29/Nov/2025:05:08:43] ENGINE Bus STARTED
Nov 29 05:08:44 compute-0 ceph-mon[75176]: [29/Nov/2025:05:08:43] ENGINE Client ('192.168.122.100', 36438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 05:08:44 compute-0 ceph-mon[75176]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:44 compute-0 ceph-mon[75176]: Set ssh ssh_user
Nov 29 05:08:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:44 compute-0 ceph-mon[75176]: Set ssh ssh_config
Nov 29 05:08:44 compute-0 ceph-mon[75176]: ssh user set to ceph-admin. sudo will be used
Nov 29 05:08:44 compute-0 ceph-mon[75176]: mgrmap e8: compute-0.csskcz(active, since 2s)
Nov 29 05:08:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:45 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 05:08:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 29 05:08:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:45 compute-0 ceph-mgr[75473]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 29 05:08:45 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 29 05:08:45 compute-0 systemd[1]: libpod-3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c.scope: Deactivated successfully.
Nov 29 05:08:45 compute-0 podman[76651]: 2025-11-29 05:08:45.408574184 +0000 UTC m=+0.651234315 container died 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-00da65f993741f48119f9ab496608d59087e9960aa9cc40912291c074034e179-merged.mount: Deactivated successfully.
Nov 29 05:08:45 compute-0 podman[76651]: 2025-11-29 05:08:45.445375354 +0000 UTC m=+0.688035485 container remove 3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c (image=quay.io/ceph/ceph:v18, name=elegant_swanson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:08:45 compute-0 systemd[1]: libpod-conmon-3366ad7fe7579d9949d118d7b67beddddbfcad1a24dff6d9cfdda04e9c25ce6c.scope: Deactivated successfully.
Nov 29 05:08:45 compute-0 podman[76706]: 2025-11-29 05:08:45.505192609 +0000 UTC m=+0.043432807 container create 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:08:45 compute-0 systemd[1]: Started libpod-conmon-50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2.scope.
Nov 29 05:08:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba43f91b1c1732213a5a92c48bf2598485b118894ea18b678675ae37b06ac2c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba43f91b1c1732213a5a92c48bf2598485b118894ea18b678675ae37b06ac2c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba43f91b1c1732213a5a92c48bf2598485b118894ea18b678675ae37b06ac2c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:45 compute-0 podman[76706]: 2025-11-29 05:08:45.48435619 +0000 UTC m=+0.022596458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:45 compute-0 podman[76706]: 2025-11-29 05:08:45.592950019 +0000 UTC m=+0.131190277 container init 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:45 compute-0 podman[76706]: 2025-11-29 05:08:45.601918126 +0000 UTC m=+0.140158314 container start 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:08:45 compute-0 podman[76706]: 2025-11-29 05:08:45.605426334 +0000 UTC m=+0.143666592 container attach 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:45 compute-0 ceph-mon[75176]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:45 compute-0 ceph-mon[75176]: Set ssh ssh_identity_key
Nov 29 05:08:45 compute-0 ceph-mon[75176]: Set ssh private key
Nov 29 05:08:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:46 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:46 compute-0 nifty_keller[76723]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDS8PFZNHYEMG3QzJ0T+fsq/aAtPRNTx+aMKwd2m8VVUO19nX++nSeRt/d87czPj4O9wzAPcH/7BshbPhc5xJnUkkoie96X/xYUNJBzgPQ4C/dMz82vVAk18swfRLBdsW74BqGEu7OERVdC7Y/xtEZFAjVKTOVZYkAYbfZmvu44ueA6sdnziaQMAmYvaOUziZoMxb3in8kywmEgIPvNgynAuegdw1FsImfkj93iNTkAl3rt88tuZuEyivCdteCLNGs4gfAF486hIPVkr8c47sBLgeg/miI6UmsvJmZvUwcTFkJpfkr00fwvW85N5NVrKsd0ZrcJuYQHbylSWbgXPdHWDIMsc0DmLPgyBS3+KP6Z/1lceD5uCbPPibt7CfECZw5WGJ1esNQTBxNIw57Vi4zW0dT227oG7qCoWQ3pkr7UGt2XDzM8Fek1Z9GigPmtTTmcWypU9skH74gbbAcVFyD9Cl9GEwE6Kfyy6OuFPR/QBCYYcXV0+wlJxxr3VRdVQ40= zuul@controller
Nov 29 05:08:46 compute-0 systemd[1]: libpod-50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2.scope: Deactivated successfully.
Nov 29 05:08:46 compute-0 podman[76706]: 2025-11-29 05:08:46.098168771 +0000 UTC m=+0.636408989 container died 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba43f91b1c1732213a5a92c48bf2598485b118894ea18b678675ae37b06ac2c2-merged.mount: Deactivated successfully.
Nov 29 05:08:46 compute-0 podman[76706]: 2025-11-29 05:08:46.149426318 +0000 UTC m=+0.687666516 container remove 50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2 (image=quay.io/ceph/ceph:v18, name=nifty_keller, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:46 compute-0 systemd[1]: libpod-conmon-50c57761956fce723d63dc77b415d90a9ae00a393504e0d855202c56600770b2.scope: Deactivated successfully.
Nov 29 05:08:46 compute-0 podman[76762]: 2025-11-29 05:08:46.217534926 +0000 UTC m=+0.046841291 container create 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:08:46 compute-0 systemd[1]: Started libpod-conmon-34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee.scope.
Nov 29 05:08:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:46 compute-0 podman[76762]: 2025-11-29 05:08:46.191853782 +0000 UTC m=+0.021160137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae6aed02d888d1428d498c1fe2320c16ab6288925c04374e065af069b2fdb2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae6aed02d888d1428d498c1fe2320c16ab6288925c04374e065af069b2fdb2c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bae6aed02d888d1428d498c1fe2320c16ab6288925c04374e065af069b2fdb2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:46 compute-0 podman[76762]: 2025-11-29 05:08:46.302308221 +0000 UTC m=+0.131614606 container init 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:08:46 compute-0 podman[76762]: 2025-11-29 05:08:46.311690787 +0000 UTC m=+0.140997142 container start 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:08:46 compute-0 podman[76762]: 2025-11-29 05:08:46.315254256 +0000 UTC m=+0.144560631 container attach 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 05:08:46 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:47 compute-0 ceph-mon[75176]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:47 compute-0 ceph-mon[75176]: Set ssh ssh_identity_pub
Nov 29 05:08:47 compute-0 sshd-session[76805]: Accepted publickey for ceph-admin from 192.168.122.100 port 57070 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:47 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 05:08:47 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 05:08:47 compute-0 systemd-logind[793]: New session 20 of user ceph-admin.
Nov 29 05:08:47 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 05:08:47 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 29 05:08:47 compute-0 systemd[76809]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:47 compute-0 sshd-session[76814]: Accepted publickey for ceph-admin from 192.168.122.100 port 57074 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:47 compute-0 systemd-logind[793]: New session 22 of user ceph-admin.
Nov 29 05:08:47 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 05:08:47 compute-0 systemd[76809]: Queued start job for default target Main User Target.
Nov 29 05:08:47 compute-0 systemd[76809]: Created slice User Application Slice.
Nov 29 05:08:47 compute-0 systemd[76809]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 05:08:47 compute-0 systemd[76809]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 05:08:47 compute-0 systemd[76809]: Reached target Paths.
Nov 29 05:08:47 compute-0 systemd[76809]: Reached target Timers.
Nov 29 05:08:47 compute-0 systemd[76809]: Starting D-Bus User Message Bus Socket...
Nov 29 05:08:47 compute-0 systemd[76809]: Starting Create User's Volatile Files and Directories...
Nov 29 05:08:47 compute-0 systemd[76809]: Finished Create User's Volatile Files and Directories.
Nov 29 05:08:47 compute-0 systemd[76809]: Listening on D-Bus User Message Bus Socket.
Nov 29 05:08:47 compute-0 systemd[76809]: Reached target Sockets.
Nov 29 05:08:47 compute-0 systemd[76809]: Reached target Basic System.
Nov 29 05:08:47 compute-0 systemd[76809]: Reached target Main User Target.
Nov 29 05:08:47 compute-0 systemd[76809]: Startup finished in 164ms.
Nov 29 05:08:47 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 29 05:08:47 compute-0 systemd[1]: Started Session 20 of User ceph-admin.
Nov 29 05:08:47 compute-0 systemd[1]: Started Session 22 of User ceph-admin.
Nov 29 05:08:47 compute-0 sshd-session[76805]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:47 compute-0 sshd-session[76814]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:47 compute-0 sudo[76828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:47 compute-0 sudo[76828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:47 compute-0 sudo[76828]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:47 compute-0 sudo[76853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:08:47 compute-0 sudo[76853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:47 compute-0 sudo[76853]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:47 compute-0 sshd-session[76878]: Accepted publickey for ceph-admin from 192.168.122.100 port 57086 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:47 compute-0 systemd-logind[793]: New session 23 of user ceph-admin.
Nov 29 05:08:47 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 29 05:08:47 compute-0 sshd-session[76878]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:47 compute-0 sudo[76882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:47 compute-0 sudo[76882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:47 compute-0 sudo[76882]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:47 compute-0 sudo[76907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 05:08:47 compute-0 sudo[76907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:47 compute-0 sudo[76907]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:48 compute-0 ceph-mon[75176]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:48 compute-0 ceph-mon[75176]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:48 compute-0 sshd-session[76932]: Accepted publickey for ceph-admin from 192.168.122.100 port 57088 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:48 compute-0 systemd-logind[793]: New session 24 of user ceph-admin.
Nov 29 05:08:48 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 05:08:48 compute-0 sshd-session[76932]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:48 compute-0 sudo[76936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:48 compute-0 sudo[76936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:48 compute-0 sudo[76936]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:48 compute-0 sudo[76961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 29 05:08:48 compute-0 sudo[76961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:48 compute-0 sudo[76961]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:48 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 29 05:08:48 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 29 05:08:48 compute-0 sshd-session[76986]: Accepted publickey for ceph-admin from 192.168.122.100 port 57092 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:48 compute-0 systemd-logind[793]: New session 25 of user ceph-admin.
Nov 29 05:08:48 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 05:08:48 compute-0 sshd-session[76986]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:48 compute-0 sudo[76990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:48 compute-0 sudo[76990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:48 compute-0 sudo[76990]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:48 compute-0 sudo[77015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:08:48 compute-0 sudo[77015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:48 compute-0 sudo[77015]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:49 compute-0 sshd-session[77040]: Accepted publickey for ceph-admin from 192.168.122.100 port 57104 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:49 compute-0 systemd-logind[793]: New session 26 of user ceph-admin.
Nov 29 05:08:49 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 05:08:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052989 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:08:49 compute-0 sshd-session[77040]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:49 compute-0 ceph-mon[75176]: Deploying cephadm binary to compute-0
Nov 29 05:08:49 compute-0 sudo[77044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:49 compute-0 sudo[77044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:49 compute-0 sudo[77044]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:49 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
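
The mgr keeps repeating "Not sending PG status to monitor yet, waiting for OSDs" because the cluster has no OSDs at this point, so there is no placement-group state to report; the message stops once the first OSDs register. A readiness poll against the same condition might look like this (conceptual sketch; assumes the ceph CLI and an admin keyring):

    # Poll until at least one OSD is registered, mirroring the condition
    # the mgr is waiting on above.
    import json
    import subprocess
    import time

    def osd_count() -> int:
        out = subprocess.run(["ceph", "osd", "stat", "--format", "json"],
                             check=True, capture_output=True, text=True)
        return json.loads(out.stdout).get("num_osds", 0)

    while osd_count() == 0:
        time.sleep(5)
    print("OSDs present; PG status reporting can begin")
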
Nov 29 05:08:49 compute-0 sudo[77069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:08:49 compute-0 sudo[77069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:49 compute-0 sudo[77069]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:49 compute-0 sshd-session[77094]: Accepted publickey for ceph-admin from 192.168.122.100 port 57112 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:49 compute-0 systemd-logind[793]: New session 27 of user ceph-admin.
Nov 29 05:08:49 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 05:08:49 compute-0 sshd-session[77094]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:49 compute-0 sudo[77098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:49 compute-0 sudo[77098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:49 compute-0 sudo[77098]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:49 compute-0 sudo[77123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 29 05:08:49 compute-0 sudo[77123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:49 compute-0 sudo[77123]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:50 compute-0 sshd-session[77148]: Accepted publickey for ceph-admin from 192.168.122.100 port 57128 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:50 compute-0 systemd-logind[793]: New session 28 of user ceph-admin.
Nov 29 05:08:50 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 05:08:50 compute-0 sshd-session[77148]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:50 compute-0 sudo[77152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:50 compute-0 sudo[77152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:50 compute-0 sudo[77152]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:50 compute-0 sudo[77177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:08:50 compute-0 sudo[77177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:50 compute-0 sudo[77177]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:50 compute-0 sshd-session[77202]: Accepted publickey for ceph-admin from 192.168.122.100 port 53564 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:50 compute-0 systemd-logind[793]: New session 29 of user ceph-admin.
Nov 29 05:08:50 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 05:08:50 compute-0 sshd-session[77202]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:50 compute-0 sudo[77206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:50 compute-0 sudo[77206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:50 compute-0 sudo[77206]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:50 compute-0 sudo[77231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 29 05:08:50 compute-0 sudo[77231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:50 compute-0 sudo[77231]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:50 compute-0 sshd-session[77256]: Accepted publickey for ceph-admin from 192.168.122.100 port 53568 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:50 compute-0 systemd-logind[793]: New session 30 of user ceph-admin.
Nov 29 05:08:50 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 05:08:50 compute-0 sshd-session[77256]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:51 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 05:08:51 compute-0 sshd-session[77283]: Accepted publickey for ceph-admin from 192.168.122.100 port 53582 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:51 compute-0 systemd-logind[793]: New session 31 of user ceph-admin.
Nov 29 05:08:51 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 05:08:51 compute-0 sshd-session[77283]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:51 compute-0 sudo[77287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:51 compute-0 sudo[77287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:51 compute-0 sudo[77287]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:51 compute-0 sudo[77312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 29 05:08:51 compute-0 sudo[77312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:51 compute-0 sudo[77312]: pam_unix(sudo:session): session closed for user root
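
Taken together, the mkdir/touch/chown/chmod/mv commands from 05:08:48 through 05:08:51 are a staged, atomic install of the cephadm binary: write into a ".new" path under a private /tmp tree, fix ownership and mode, then rename into /var/lib/ceph/<fsid>/ so the destination never holds a half-written file (the long hex suffix on the filename looks like a content hash of the binary). The same pattern in Python, with illustrative paths:

    # Staged atomic install: write to ".new", set mode, rename into place.
    # Paths are illustrative; the chown step from the log is skipped (needs root).
    import os

    def install_atomically(payload: bytes, dest: str, mode: int = 0o644) -> None:
        staging = dest + ".new"
        os.makedirs(os.path.dirname(dest), exist_ok=True)  # /bin/mkdir -p
        with open(staging, "wb") as f:                     # /bin/touch + write
            f.write(payload)
        os.chmod(staging, mode)                            # /bin/chmod 644
        os.replace(staging, dest)                          # /bin/mv (atomic rename)

    install_atomically(b"#!/usr/bin/python3\n", "/tmp/demo/cephadm.binary")
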
Nov 29 05:08:51 compute-0 sshd-session[77337]: Accepted publickey for ceph-admin from 192.168.122.100 port 53598 ssh2: RSA SHA256:2gEq/BAiefvZx/haw6y1weuTlTeVTLDQlcaQNuNHhGU
Nov 29 05:08:51 compute-0 systemd-logind[793]: New session 32 of user ceph-admin.
Nov 29 05:08:51 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 05:08:51 compute-0 sshd-session[77337]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 29 05:08:51 compute-0 sudo[77341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:51 compute-0 sudo[77341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:51 compute-0 sudo[77341]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:52 compute-0 sudo[77366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 05:08:52 compute-0 sudo[77366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:52 compute-0 sudo[77366]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 05:08:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:52 compute-0 ceph-mgr[75473]: [cephadm INFO root] Added host compute-0
Nov 29 05:08:52 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 05:08:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 05:08:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:08:52 compute-0 compassionate_solomon[76779]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 05:08:52 compute-0 systemd[1]: libpod-34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee.scope: Deactivated successfully.
Nov 29 05:08:52 compute-0 podman[76762]: 2025-11-29 05:08:52.322058971 +0000 UTC m=+6.151365346 container died 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-bae6aed02d888d1428d498c1fe2320c16ab6288925c04374e065af069b2fdb2c-merged.mount: Deactivated successfully.
Nov 29 05:08:52 compute-0 podman[76762]: 2025-11-29 05:08:52.374482554 +0000 UTC m=+6.203788879 container remove 34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee (image=quay.io/ceph/ceph:v18, name=compassionate_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:52 compute-0 sudo[77412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:52 compute-0 sudo[77412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:52 compute-0 sudo[77412]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:52 compute-0 systemd[1]: libpod-conmon-34e4f1bc96bba7cd57c65e57573d8324483bd0ca64814d0c867e4356faf736ee.scope: Deactivated successfully.
Nov 29 05:08:52 compute-0 podman[77446]: 2025-11-29 05:08:52.443666676 +0000 UTC m=+0.049646964 container create 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 05:08:52 compute-0 sudo[77454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:08:52 compute-0 sudo[77454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:52 compute-0 sudo[77454]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:52 compute-0 systemd[1]: Started libpod-conmon-9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e.scope.
Nov 29 05:08:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:52 compute-0 podman[77446]: 2025-11-29 05:08:52.419177087 +0000 UTC m=+0.025157405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:52 compute-0 sudo[77488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:52 compute-0 sudo[77488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f027198f963c5ea959aea8e8b9538606e1f4e45cb677c030cd1715b80a9da2a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:52 compute-0 sudo[77488]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f027198f963c5ea959aea8e8b9538606e1f4e45cb677c030cd1715b80a9da2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f027198f963c5ea959aea8e8b9538606e1f4e45cb677c030cd1715b80a9da2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:52 compute-0 podman[77446]: 2025-11-29 05:08:52.530880324 +0000 UTC m=+0.136860622 container init 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:52 compute-0 podman[77446]: 2025-11-29 05:08:52.537352656 +0000 UTC m=+0.143332934 container start 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:08:52 compute-0 podman[77446]: 2025-11-29 05:08:52.540494605 +0000 UTC m=+0.146474913 container attach 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 05:08:52 compute-0 sudo[77516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 29 05:08:52 compute-0 sudo[77516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:52 compute-0 podman[77569]: 2025-11-29 05:08:52.84035351 +0000 UTC m=+0.061879162 container create 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:52 compute-0 systemd[1]: Started libpod-conmon-0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550.scope.
Nov 29 05:08:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:52 compute-0 podman[77569]: 2025-11-29 05:08:52.899147563 +0000 UTC m=+0.120673215 container init 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:08:52 compute-0 podman[77569]: 2025-11-29 05:08:52.911328711 +0000 UTC m=+0.132854403 container start 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:08:52 compute-0 podman[77569]: 2025-11-29 05:08:52.817713252 +0000 UTC m=+0.039238964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:52 compute-0 podman[77569]: 2025-11-29 05:08:52.915287239 +0000 UTC m=+0.136812891 container attach 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:53 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:53 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 29 05:08:53 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 29 05:08:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 05:08:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:53 compute-0 wonderful_driscoll[77493]: Scheduled mon update...
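
"orch apply mon" does not start monitors directly: it saves a declarative spec (the config-key mgr/cephadm/spec.mon write above) with a placement count of 5, and the background serve loop then reconciles the actual daemon set toward it, hence "Scheduled mon update...". The stored spec is roughly this shape (illustrative, following the Ceph service-spec fields):

    # Declarative target state; the orchestrator converges toward it.
    mon_spec = {
        "service_type": "mon",
        "placement": {"count": 5},   # "placement count:5" in the log
    }
    # CLI equivalent: `ceph orch apply mon 5`
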
Nov 29 05:08:53 compute-0 systemd[1]: libpod-9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e.scope: Deactivated successfully.
Nov 29 05:08:53 compute-0 podman[77446]: 2025-11-29 05:08:53.088325154 +0000 UTC m=+0.694305472 container died 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 05:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f027198f963c5ea959aea8e8b9538606e1f4e45cb677c030cd1715b80a9da2a-merged.mount: Deactivated successfully.
Nov 29 05:08:53 compute-0 podman[77446]: 2025-11-29 05:08:53.128830965 +0000 UTC m=+0.734811283 container remove 9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e (image=quay.io/ceph/ceph:v18, name=wonderful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:08:53 compute-0 systemd[1]: libpod-conmon-9c2871c3d1700230eff3dfe34afebfce84855bbf38964b91a9a002601b76da8e.scope: Deactivated successfully.
Nov 29 05:08:53 compute-0 podman[77624]: 2025-11-29 05:08:53.191013103 +0000 UTC m=+0.035649655 container create 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:08:53 compute-0 relaxed_davinci[77604]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
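
The short-lived relaxed_davinci container above exists only to print the image's Ceph version; cephadm routinely launches one-shot containers like this so it reports on the image's binaries rather than anything installed on the host. An equivalent probe (assumes podman and the image are available locally):

    # One-shot container probe: run a single command inside the Ceph image.
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:v18", "ceph", "--version"],
        check=True, capture_output=True, text=True)
    print(out.stdout.strip())  # e.g. "ceph version 18.2.7 (...) reef (stable)"
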
Nov 29 05:08:53 compute-0 podman[77569]: 2025-11-29 05:08:53.21222517 +0000 UTC m=+0.433750822 container died 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:08:53 compute-0 systemd[1]: Started libpod-conmon-786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608.scope.
Nov 29 05:08:53 compute-0 systemd[1]: libpod-0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550.scope: Deactivated successfully.
Nov 29 05:08:53 compute-0 podman[77569]: 2025-11-29 05:08:53.244416858 +0000 UTC m=+0.465942510 container remove 0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550 (image=quay.io/ceph/ceph:v18, name=relaxed_davinci, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:53 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 05:08:53 compute-0 systemd[1]: libpod-conmon-0b32665811ece61133a908ab34e39fb7a43ac9b456fe9634e62963f61e7ca550.scope: Deactivated successfully.
Nov 29 05:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2bc965a3bc3e736c0c96f3ec2e384474f8ff79150e9344085e098ed26f2dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2bc965a3bc3e736c0c96f3ec2e384474f8ff79150e9344085e098ed26f2dc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a2bc965a3bc3e736c0c96f3ec2e384474f8ff79150e9344085e098ed26f2dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:53 compute-0 podman[77624]: 2025-11-29 05:08:53.262348222 +0000 UTC m=+0.106984784 container init 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:53 compute-0 podman[77624]: 2025-11-29 05:08:53.267721679 +0000 UTC m=+0.112358231 container start 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:08:53 compute-0 podman[77624]: 2025-11-29 05:08:53.270631584 +0000 UTC m=+0.115268166 container attach 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:08:53 compute-0 sudo[77516]: pam_unix(sudo:session): session closed for user root
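
The inspect-image call that just finished resolves the floating tag quay.io/ceph/ceph:v18 to an immutable digest; the later `ls` at 05:08:54 already pins the image as quay.io/ceph/ceph@sha256:1b9158..., so subsequent daemons run byte-identical content even if the tag moves. A sketch of the same lookup via podman's JSON output (assumes podman):

    # Resolve a tag to its repo digest(s), as inspect-image does conceptually.
    import json
    import subprocess

    def repo_digests(image: str = "quay.io/ceph/ceph:v18") -> list:
        out = subprocess.run(["podman", "image", "inspect", image],
                             check=True, capture_output=True, text=True)
        return json.loads(out.stdout)[0].get("RepoDigests", [])

    print(repo_digests())   # e.g. ["quay.io/ceph/ceph@sha256:..."]
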
Nov 29 05:08:53 compute-0 podman[77624]: 2025-11-29 05:08:53.175935601 +0000 UTC m=+0.020572173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 29 05:08:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:53 compute-0 ceph-mon[75176]: Added host compute-0
Nov 29 05:08:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:08:53 compute-0 ceph-mon[75176]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:53 compute-0 ceph-mon[75176]: Saving service mon spec with placement count:5
Nov 29 05:08:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:53 compute-0 sudo[77657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:53 compute-0 sudo[77657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:53 compute-0 sudo[77657]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd8e940ee27e6268b82c1967ba0634fbd526883a094ff47b788fb91b10daf543-merged.mount: Deactivated successfully.
Nov 29 05:08:53 compute-0 sudo[77682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:08:53 compute-0 sudo[77682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:53 compute-0 sudo[77682]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:53 compute-0 sudo[77707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:53 compute-0 sudo[77707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:53 compute-0 sudo[77707]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:53 compute-0 sudo[77732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 05:08:53 compute-0 sudo[77732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:53 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:53 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 29 05:08:53 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 29 05:08:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 05:08:53 compute-0 sudo[77732]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:53 compute-0 optimistic_blackwell[77647]: Scheduled mgr update...
Nov 29 05:08:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:08:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:53 compute-0 systemd[1]: libpod-786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608.scope: Deactivated successfully.
Nov 29 05:08:53 compute-0 podman[77624]: 2025-11-29 05:08:53.817027992 +0000 UTC m=+0.661664574 container died 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4a2bc965a3bc3e736c0c96f3ec2e384474f8ff79150e9344085e098ed26f2dc-merged.mount: Deactivated successfully.
Nov 29 05:08:53 compute-0 sudo[77796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:53 compute-0 sudo[77796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:53 compute-0 sudo[77796]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:53 compute-0 sudo[77832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:08:53 compute-0 sudo[77832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:53 compute-0 sudo[77832]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:54 compute-0 sudo[77857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:54 compute-0 sudo[77857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:54 compute-0 sudo[77857]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:08:54 compute-0 podman[77624]: 2025-11-29 05:08:54.160884254 +0000 UTC m=+1.005520846 container remove 786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608 (image=quay.io/ceph/ceph:v18, name=optimistic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:08:54 compute-0 sudo[77882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:08:54 compute-0 sudo[77882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:54 compute-0 systemd[1]: libpod-conmon-786223cac3759f420973e2d811afed7a81946ac8809b0af1e5061c0257c0f608.scope: Deactivated successfully.
Nov 29 05:08:54 compute-0 podman[77905]: 2025-11-29 05:08:54.2366535 +0000 UTC m=+0.045769207 container create 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:08:54 compute-0 systemd[1]: Started libpod-conmon-424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4.scope.
Nov 29 05:08:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e21dcb5013ea65d53f8c826985c973a8845b05ecf02f8b75027925802ed678/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e21dcb5013ea65d53f8c826985c973a8845b05ecf02f8b75027925802ed678/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e21dcb5013ea65d53f8c826985c973a8845b05ecf02f8b75027925802ed678/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:54 compute-0 podman[77905]: 2025-11-29 05:08:54.222391897 +0000 UTC m=+0.031507624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:54 compute-0 podman[77905]: 2025-11-29 05:08:54.331840224 +0000 UTC m=+0.140956001 container init 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:54 compute-0 podman[77905]: 2025-11-29 05:08:54.33937176 +0000 UTC m=+0.148487507 container start 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 05:08:54 compute-0 podman[77905]: 2025-11-29 05:08:54.343306547 +0000 UTC m=+0.152422304 container attach 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:54 compute-0 podman[78020]: 2025-11-29 05:08:54.738209212 +0000 UTC m=+0.052181139 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:54 compute-0 ceph-mon[75176]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:54 compute-0 ceph-mon[75176]: Saving service mgr spec with placement count:2
Nov 29 05:08:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:54 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:54 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service crash spec with placement *
Nov 29 05:08:54 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 29 05:08:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 05:08:54 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:54 compute-0 gifted_agnesi[77924]: Scheduled crash update...
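
The mgr spec saved a moment earlier follows the same pattern as mon (placement count:2), while the crash collector uses placement "*": a host-pattern wildcard meaning one crash daemon on every managed host, present and future. Illustrative spec shape:

    # "*" is a host pattern: deploy on all hosts the orchestrator manages.
    crash_spec = {
        "service_type": "crash",
        "placement": {"host_pattern": "*"},
    }
    # CLI equivalent: `ceph orch apply crash '*'`
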
Nov 29 05:08:54 compute-0 systemd[1]: libpod-424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4.scope: Deactivated successfully.
Nov 29 05:08:54 compute-0 podman[77905]: 2025-11-29 05:08:54.864130122 +0000 UTC m=+0.673245839 container died 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:08:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7e21dcb5013ea65d53f8c826985c973a8845b05ecf02f8b75027925802ed678-merged.mount: Deactivated successfully.
Nov 29 05:08:54 compute-0 podman[77905]: 2025-11-29 05:08:54.927843873 +0000 UTC m=+0.736959610 container remove 424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4 (image=quay.io/ceph/ceph:v18, name=gifted_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:08:54 compute-0 systemd[1]: libpod-conmon-424e2706b84063d15017a6e72e8b3158cfa3ff28127df8c41924868ced672df4.scope: Deactivated successfully.
Nov 29 05:08:54 compute-0 podman[78055]: 2025-11-29 05:08:54.993503527 +0000 UTC m=+0.045327847 container create 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:08:55 compute-0 systemd[1]: Started libpod-conmon-597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769.scope.
Nov 29 05:08:55 compute-0 podman[78020]: 2025-11-29 05:08:55.04820788 +0000 UTC m=+0.362179807 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d401b04751ac85038805936015ac0c12b14bf8a5c7dea80b0b4a7e742ce552/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d401b04751ac85038805936015ac0c12b14bf8a5c7dea80b0b4a7e742ce552/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d401b04751ac85038805936015ac0c12b14bf8a5c7dea80b0b4a7e742ce552/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:55 compute-0 podman[78055]: 2025-11-29 05:08:54.972084636 +0000 UTC m=+0.023909036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:55 compute-0 podman[78055]: 2025-11-29 05:08:55.076046193 +0000 UTC m=+0.127870533 container init 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:08:55 compute-0 podman[78055]: 2025-11-29 05:08:55.082122186 +0000 UTC m=+0.133946506 container start 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:55 compute-0 podman[78055]: 2025-11-29 05:08:55.085701395 +0000 UTC m=+0.137525745 container attach 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:08:55 compute-0 sudo[77882]: pam_unix(sudo:session): session closed for user root
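
The session that just closed ran "cephadm ... ls", which inventories every daemon deployed on this host and prints a JSON list; note that the invocation pins the image by digest rather than tag. Consuming that output might look like this (assumes cephadm on PATH and root privileges):

    # Parse `cephadm ls`: one JSON object per deployed daemon on this host.
    import json
    import subprocess

    out = subprocess.run(["cephadm", "ls"], check=True,
                         capture_output=True, text=True)
    for daemon in json.loads(out.stdout):
        print(daemon.get("name"), daemon.get("state"))
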
Nov 29 05:08:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:08:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:55 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 05:08:55 compute-0 sudo[78107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:55 compute-0 sudo[78107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:55 compute-0 sudo[78107]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:55 compute-0 sudo[78132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:08:55 compute-0 sudo[78132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:55 compute-0 sudo[78132]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:55 compute-0 sudo[78157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:55 compute-0 sudo[78157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:55 compute-0 sudo[78157]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:55 compute-0 sudo[78182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:08:55 compute-0 sudo[78182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:55 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78239 (sysctl)
Nov 29 05:08:55 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 05:08:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 29 05:08:55 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 05:08:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3994711497' entity='client.admin' 
Nov 29 05:08:55 compute-0 systemd[1]: libpod-597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769.scope: Deactivated successfully.
Nov 29 05:08:55 compute-0 podman[78055]: 2025-11-29 05:08:55.67606908 +0000 UTC m=+0.727893400 container died 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 05:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-63d401b04751ac85038805936015ac0c12b14bf8a5c7dea80b0b4a7e742ce552-merged.mount: Deactivated successfully.
Nov 29 05:08:55 compute-0 podman[78055]: 2025-11-29 05:08:55.71563832 +0000 UTC m=+0.767462640 container remove 597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769 (image=quay.io/ceph/ceph:v18, name=quirky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:08:55 compute-0 systemd[1]: libpod-conmon-597a01afc8fc66529108ab34ba65758b9b6d66a9100cdd6d79c5764c28970769.scope: Deactivated successfully.
Nov 29 05:08:55 compute-0 podman[78256]: 2025-11-29 05:08:55.776251703 +0000 UTC m=+0.041800380 container create 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:08:55 compute-0 systemd[1]: Started libpod-conmon-08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6.scope.
Nov 29 05:08:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a8290e39549968f85b0b7399d1cb0f789187833b35bc77b37ebc3721fa7f6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a8290e39549968f85b0b7399d1cb0f789187833b35bc77b37ebc3721fa7f6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a8290e39549968f85b0b7399d1cb0f789187833b35bc77b37ebc3721fa7f6d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:55 compute-0 ceph-mon[75176]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:55 compute-0 ceph-mon[75176]: Saving service crash spec with placement *
Nov 29 05:08:55 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:55 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:55 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3994711497' entity='client.admin' 
Nov 29 05:08:55 compute-0 podman[78256]: 2025-11-29 05:08:55.759412053 +0000 UTC m=+0.024960720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:55 compute-0 podman[78256]: 2025-11-29 05:08:55.862509211 +0000 UTC m=+0.128057878 container init 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:08:55 compute-0 podman[78256]: 2025-11-29 05:08:55.86889529 +0000 UTC m=+0.134443937 container start 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:08:55 compute-0 podman[78256]: 2025-11-29 05:08:55.872742945 +0000 UTC m=+0.138291622 container attach 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:08:55 compute-0 sudo[78182]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:56 compute-0 sudo[78294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:56 compute-0 sudo[78294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:56 compute-0 sudo[78294]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:56 compute-0 sudo[78319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:08:56 compute-0 sudo[78319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:56 compute-0 sudo[78319]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:56 compute-0 sudo[78344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:56 compute-0 sudo[78344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:56 compute-0 sudo[78344]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:56 compute-0 sudo[78369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 05:08:56 compute-0 sudo[78369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 29 05:08:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:56 compute-0 sudo[78369]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:08:56 compute-0 systemd[1]: libpod-08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6.scope: Deactivated successfully.
Nov 29 05:08:56 compute-0 podman[78256]: 2025-11-29 05:08:56.398354215 +0000 UTC m=+0.663902862 container died 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:08:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4a8290e39549968f85b0b7399d1cb0f789187833b35bc77b37ebc3721fa7f6d-merged.mount: Deactivated successfully.
Nov 29 05:08:56 compute-0 podman[78256]: 2025-11-29 05:08:56.450586845 +0000 UTC m=+0.716135492 container remove 08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6 (image=quay.io/ceph/ceph:v18, name=modest_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:56 compute-0 systemd[1]: libpod-conmon-08d3003f9a1f1df7ee3a54280836bb3a422a3fcc9603a1af7edccee6e4bbfdc6.scope: Deactivated successfully.
Nov 29 05:08:56 compute-0 sudo[78439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:56 compute-0 sudo[78439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:56 compute-0 sudo[78439]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:56 compute-0 podman[78467]: 2025-11-29 05:08:56.507518066 +0000 UTC m=+0.038445976 container create 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:56 compute-0 sudo[78476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:08:56 compute-0 sudo[78476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:56 compute-0 sudo[78476]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:56 compute-0 systemd[1]: Started libpod-conmon-3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1.scope.
Nov 29 05:08:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb8e6572a1fffc581e0d7237afd8fdce45320eb02b5bed02f4104069fc8d816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb8e6572a1fffc581e0d7237afd8fdce45320eb02b5bed02f4104069fc8d816/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eb8e6572a1fffc581e0d7237afd8fdce45320eb02b5bed02f4104069fc8d816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:56 compute-0 sudo[78510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:08:56 compute-0 sudo[78510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:56 compute-0 sudo[78510]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:56 compute-0 podman[78467]: 2025-11-29 05:08:56.492984337 +0000 UTC m=+0.023912277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:56 compute-0 podman[78467]: 2025-11-29 05:08:56.59269649 +0000 UTC m=+0.123624410 container init 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:08:56 compute-0 podman[78467]: 2025-11-29 05:08:56.597840803 +0000 UTC m=+0.128768713 container start 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:08:56 compute-0 podman[78467]: 2025-11-29 05:08:56.600822639 +0000 UTC m=+0.131750579 container attach 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:08:56 compute-0 sudo[78539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- inventory --format=json-pretty --filter-for-batch
Nov 29 05:08:56 compute-0 sudo[78539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:08:56 compute-0 podman[78624]: 2025-11-29 05:08:56.948291981 +0000 UTC m=+0.036702998 container create 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:08:56 compute-0 systemd[1]: Started libpod-conmon-31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89.scope.
Nov 29 05:08:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:57 compute-0 podman[78624]: 2025-11-29 05:08:56.931490722 +0000 UTC m=+0.019901759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:08:57 compute-0 podman[78624]: 2025-11-29 05:08:57.028523855 +0000 UTC m=+0.116934892 container init 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:08:57 compute-0 podman[78624]: 2025-11-29 05:08:57.034748973 +0000 UTC m=+0.123159990 container start 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:08:57 compute-0 crazy_panini[78641]: 167 167
Nov 29 05:08:57 compute-0 systemd[1]: libpod-31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89.scope: Deactivated successfully.
Nov 29 05:08:57 compute-0 podman[78624]: 2025-11-29 05:08:57.038079656 +0000 UTC m=+0.126490693 container attach 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:08:57 compute-0 conmon[78641]: conmon 31b972cb37742d72368f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89.scope/container/memory.events
Nov 29 05:08:57 compute-0 podman[78624]: 2025-11-29 05:08:57.04190002 +0000 UTC m=+0.130311027 container died 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-74a79b72a9f62a6e5ab6e0f7ddff1fea0a95c791bbc56cbfe850ec76f3d1ed86-merged.mount: Deactivated successfully.
Nov 29 05:08:57 compute-0 podman[78624]: 2025-11-29 05:08:57.077918783 +0000 UTC m=+0.166329820 container remove 31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:08:57 compute-0 systemd[1]: libpod-conmon-31b972cb37742d72368f607f39823a56a92335af9888e1e6b46ecd3ef8c0bb89.scope: Deactivated successfully.
Nov 29 05:08:57 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 05:08:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:57 compute-0 ceph-mgr[75473]: [cephadm INFO root] Added label _admin to host compute-0
Nov 29 05:08:57 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 29 05:08:57 compute-0 flamboyant_lewin[78518]: Added label _admin to host compute-0
Nov 29 05:08:57 compute-0 systemd[1]: libpod-3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1.scope: Deactivated successfully.
Nov 29 05:08:57 compute-0 podman[78467]: 2025-11-29 05:08:57.138618057 +0000 UTC m=+0.669546007 container died 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 05:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eb8e6572a1fffc581e0d7237afd8fdce45320eb02b5bed02f4104069fc8d816-merged.mount: Deactivated successfully.
Nov 29 05:08:57 compute-0 podman[78467]: 2025-11-29 05:08:57.183708649 +0000 UTC m=+0.714636569 container remove 3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1 (image=quay.io/ceph/ceph:v18, name=flamboyant_lewin, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:08:57 compute-0 systemd[1]: libpod-conmon-3935cdd470b31c0596e635e758d776ae9f010ed8c3297c808f60d8dc150a82b1.scope: Deactivated successfully.
Nov 29 05:08:57 compute-0 podman[78673]: 2025-11-29 05:08:57.238563266 +0000 UTC m=+0.037229480 container create 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:08:57 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 05:08:57 compute-0 systemd[1]: Started libpod-conmon-91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6.scope.
Nov 29 05:08:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ce705471551ab2e37e1555362cda2cf65449f38d244f9081bdaaa23119a91d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ce705471551ab2e37e1555362cda2cf65449f38d244f9081bdaaa23119a91d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ce705471551ab2e37e1555362cda2cf65449f38d244f9081bdaaa23119a91d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:57 compute-0 podman[78673]: 2025-11-29 05:08:57.222801759 +0000 UTC m=+0.021467993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:57 compute-0 podman[78673]: 2025-11-29 05:08:57.322731987 +0000 UTC m=+0.121398281 container init 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:08:57 compute-0 podman[78673]: 2025-11-29 05:08:57.327692896 +0000 UTC m=+0.126359110 container start 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:08:57 compute-0 podman[78673]: 2025-11-29 05:08:57.331465909 +0000 UTC m=+0.130132123 container attach 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 29 05:08:57 compute-0 ceph-mon[75176]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:08:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 29 05:08:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2928868753' entity='client.admin' 
Nov 29 05:08:57 compute-0 systemd[1]: libpod-91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6.scope: Deactivated successfully.
Nov 29 05:08:57 compute-0 podman[78717]: 2025-11-29 05:08:57.90758592 +0000 UTC m=+0.029768896 container died 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:08:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-25ce705471551ab2e37e1555362cda2cf65449f38d244f9081bdaaa23119a91d-merged.mount: Deactivated successfully.
Nov 29 05:08:57 compute-0 podman[78717]: 2025-11-29 05:08:57.961562048 +0000 UTC m=+0.083745004 container remove 91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6 (image=quay.io/ceph/ceph:v18, name=quirky_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:57 compute-0 systemd[1]: libpod-conmon-91e4b5ef3e44c8662f1afb2a1a215b0838f1511d470059e8e8d9d16353c395b6.scope: Deactivated successfully.
Nov 29 05:08:58 compute-0 podman[78732]: 2025-11-29 05:08:58.064968502 +0000 UTC m=+0.062096577 container create e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:08:58 compute-0 systemd[1]: Started libpod-conmon-e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a.scope.
Nov 29 05:08:58 compute-0 podman[78732]: 2025-11-29 05:08:58.040738748 +0000 UTC m=+0.037866893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d343671a77f9facb3caf47362b55300a8b5e566705035510f86fefea2be1eb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d343671a77f9facb3caf47362b55300a8b5e566705035510f86fefea2be1eb3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d343671a77f9facb3caf47362b55300a8b5e566705035510f86fefea2be1eb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:58 compute-0 podman[78732]: 2025-11-29 05:08:58.159782087 +0000 UTC m=+0.156910222 container init e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:58 compute-0 podman[78732]: 2025-11-29 05:08:58.169516301 +0000 UTC m=+0.166644406 container start e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:08:58 compute-0 podman[78732]: 2025-11-29 05:08:58.173778114 +0000 UTC m=+0.170906219 container attach e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 05:08:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 29 05:08:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1442575940' entity='client.admin' 
Nov 29 05:08:58 compute-0 compassionate_wiles[78749]: set mgr/dashboard/cluster/status
Nov 29 05:08:58 compute-0 systemd[1]: libpod-e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a.scope: Deactivated successfully.
Nov 29 05:08:58 compute-0 podman[78732]: 2025-11-29 05:08:58.834154259 +0000 UTC m=+0.831282334 container died e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:08:58 compute-0 ceph-mon[75176]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:08:58 compute-0 ceph-mon[75176]: Added label _admin to host compute-0
Nov 29 05:08:58 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2928868753' entity='client.admin' 
Nov 29 05:08:58 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1442575940' entity='client.admin' 
Nov 29 05:08:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d343671a77f9facb3caf47362b55300a8b5e566705035510f86fefea2be1eb3-merged.mount: Deactivated successfully.
Nov 29 05:08:58 compute-0 podman[78732]: 2025-11-29 05:08:58.883181117 +0000 UTC m=+0.880309182 container remove e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a (image=quay.io/ceph/ceph:v18, name=compassionate_wiles, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:08:58 compute-0 systemd[1]: libpod-conmon-e58529be19ea9fb241317af2eab3196e2ebea79fdfc7e24deb05e1a7870c290a.scope: Deactivated successfully.
Nov 29 05:08:58 compute-0 sudo[74136]: pam_unix(sudo:session): session closed for user root
Nov 29 05:08:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:08:59 compute-0 podman[78794]: 2025-11-29 05:08:59.100105009 +0000 UTC m=+0.062056976 container create 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:08:59 compute-0 systemd[1]: Started libpod-conmon-4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9.scope.
Nov 29 05:08:59 compute-0 podman[78794]: 2025-11-29 05:08:59.073965873 +0000 UTC m=+0.035917890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:08:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3b65fe1cc0b2c06d4a8fcd4ac28b074134c71844905f85f61eac253bec1f28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3b65fe1cc0b2c06d4a8fcd4ac28b074134c71844905f85f61eac253bec1f28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3b65fe1cc0b2c06d4a8fcd4ac28b074134c71844905f85f61eac253bec1f28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3b65fe1cc0b2c06d4a8fcd4ac28b074134c71844905f85f61eac253bec1f28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:59 compute-0 podman[78794]: 2025-11-29 05:08:59.206232643 +0000 UTC m=+0.168184610 container init 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 05:08:59 compute-0 podman[78794]: 2025-11-29 05:08:59.220635889 +0000 UTC m=+0.182587826 container start 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:59 compute-0 podman[78794]: 2025-11-29 05:08:59.224562546 +0000 UTC m=+0.186514483 container attach 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:08:59 compute-0 ceph-mgr[75473]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 05:08:59 compute-0 sudo[78839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfkkxbofrmdekyoacoqwjgxnffmzaceb ; /usr/bin/python3'
Nov 29 05:08:59 compute-0 sudo[78839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:08:59 compute-0 python3[78841]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:08:59 compute-0 podman[78842]: 2025-11-29 05:08:59.586904556 +0000 UTC m=+0.080919431 container create 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:59 compute-0 podman[78842]: 2025-11-29 05:08:59.553871839 +0000 UTC m=+0.047886764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:08:59 compute-0 systemd[1]: Started libpod-conmon-60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f.scope.
Nov 29 05:08:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd788860352ad5b41e56aa95240c7584d28efb3bfe6d1c4a7d79d99b28de10/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ddd788860352ad5b41e56aa95240c7584d28efb3bfe6d1c4a7d79d99b28de10/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:08:59 compute-0 podman[78842]: 2025-11-29 05:08:59.743992341 +0000 UTC m=+0.238007276 container init 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:08:59 compute-0 podman[78842]: 2025-11-29 05:08:59.752045058 +0000 UTC m=+0.246059943 container start 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:08:59 compute-0 podman[78842]: 2025-11-29 05:08:59.756927645 +0000 UTC m=+0.250942590 container attach 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:09:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 29 05:09:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1523397030' entity='client.admin' 
Nov 29 05:09:00 compute-0 systemd[1]: libpod-60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f.scope: Deactivated successfully.
Nov 29 05:09:00 compute-0 podman[78842]: 2025-11-29 05:09:00.347532565 +0000 UTC m=+0.841547420 container died 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ddd788860352ad5b41e56aa95240c7584d28efb3bfe6d1c4a7d79d99b28de10-merged.mount: Deactivated successfully.
Nov 29 05:09:00 compute-0 podman[78842]: 2025-11-29 05:09:00.395445458 +0000 UTC m=+0.889460313 container remove 60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f (image=quay.io/ceph/ceph:v18, name=relaxed_morse, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:09:00 compute-0 systemd[1]: libpod-conmon-60747c8c6ddb025bec54928fd55af2a7889af6433454a5ab8ab433ee299c649f.scope: Deactivated successfully.
Nov 29 05:09:00 compute-0 sudo[78839]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]: [
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:     {
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:         "available": false,
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:         "ceph_device": false,
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:         "lsm_data": {},
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:         "lvs": [],
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:         "path": "/dev/sr0",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:         "rejected_reasons": [
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "Has a FileSystem",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "Insufficient space (<5GB)"
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:         ],
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:         "sys_api": {
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "actuators": null,
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "device_nodes": "sr0",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "devname": "sr0",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "human_readable_size": "482.00 KB",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "id_bus": "ata",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "model": "QEMU DVD-ROM",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "nr_requests": "2",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "parent": "/dev/sr0",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "partitions": {},
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "path": "/dev/sr0",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "removable": "1",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "rev": "2.5+",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "ro": "0",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "rotational": "1",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "sas_address": "",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "sas_device_handle": "",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "scheduler_mode": "mq-deadline",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "sectors": 0,
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "sectorsize": "2048",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "size": 493568.0,
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "support_discard": "2048",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "type": "disk",
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:             "vendor": "QEMU"
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:         }
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]:     }
Nov 29 05:09:00 compute-0 musing_varahamihira[78810]: ]
Nov 29 05:09:00 compute-0 systemd[1]: libpod-4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9.scope: Deactivated successfully.
Nov 29 05:09:00 compute-0 podman[78794]: 2025-11-29 05:09:00.661726865 +0000 UTC m=+1.623678802 container died 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 29 05:09:00 compute-0 systemd[1]: libpod-4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9.scope: Consumed 1.474s CPU time.
Nov 29 05:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f3b65fe1cc0b2c06d4a8fcd4ac28b074134c71844905f85f61eac253bec1f28-merged.mount: Deactivated successfully.
Nov 29 05:09:00 compute-0 podman[78794]: 2025-11-29 05:09:00.725926367 +0000 UTC m=+1.687878294 container remove 4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:09:00 compute-0 systemd[1]: libpod-conmon-4ca0ff947d4cab108f3963d9e67fefa65cba70100770002d96e48b5f48813de9.scope: Deactivated successfully.
Nov 29 05:09:00 compute-0 sudo[78539]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 05:09:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:09:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:00 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:09:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:09:00 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 05:09:00 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 05:09:00 compute-0 sudo[80818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:00 compute-0 sudo[80818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:00 compute-0 sudo[80818]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:00 compute-0 sudo[80871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 05:09:00 compute-0 sudo[80871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:00 compute-0 sudo[80871]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:00 compute-0 sudo[80915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:00 compute-0 sudo[80915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:00 compute-0 sudo[80915]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 sudo[80940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph
Nov 29 05:09:01 compute-0 sudo[80940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[80940]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 sudo[80965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:01 compute-0 sudo[80965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[80965]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 sudo[81013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph/ceph.conf.new
Nov 29 05:09:01 compute-0 sudo[81013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[81013]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 sudo[81062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:01 compute-0 sudo[81062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[81062]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 ceph-mgr[75473]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 29 05:09:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:01 compute-0 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 05:09:01 compute-0 sudo[81113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjvnzkpzpmkodqmlotnqcxcvrtljqfvm ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764392940.7766569-36445-157247356225760/async_wrapper.py j676110802398 30 /home/zuul/.ansible/tmp/ansible-tmp-1764392940.7766569-36445-157247356225760/AnsiballZ_command.py _'
Nov 29 05:09:01 compute-0 sudo[81113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:01 compute-0 sudo[81112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:01 compute-0 sudo[81112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[81112]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1523397030' entity='client.admin' 
Nov 29 05:09:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:09:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:09:01 compute-0 ceph-mon[75176]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 05:09:01 compute-0 ceph-mon[75176]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 05:09:01 compute-0 sudo[81140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:01 compute-0 sudo[81140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[81140]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 sudo[81165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph/ceph.conf.new
Nov 29 05:09:01 compute-0 sudo[81165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 ansible-async_wrapper.py[81126]: Invoked with j676110802398 30 /home/zuul/.ansible/tmp/ansible-tmp-1764392940.7766569-36445-157247356225760/AnsiballZ_command.py _
Nov 29 05:09:01 compute-0 sudo[81165]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 ansible-async_wrapper.py[81192]: Starting module and watcher
Nov 29 05:09:01 compute-0 ansible-async_wrapper.py[81192]: Start watching 81193 (30)
Nov 29 05:09:01 compute-0 ansible-async_wrapper.py[81193]: Start module (81193)
Nov 29 05:09:01 compute-0 ansible-async_wrapper.py[81126]: Return async_wrapper task started.
Nov 29 05:09:01 compute-0 sudo[81113]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 sudo[81218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:01 compute-0 sudo[81218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[81218]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 python3[81195]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:01 compute-0 sudo[81243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph/ceph.conf.new
Nov 29 05:09:01 compute-0 sudo[81243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[81243]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 podman[81266]: 2025-11-29 05:09:01.72900771 +0000 UTC m=+0.044283886 container create 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:09:01 compute-0 systemd[1]: Started libpod-conmon-88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735.scope.
Nov 29 05:09:01 compute-0 sudo[81278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:01 compute-0 sudo[81278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[81278]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73e4e7f968ca73f71fcf1081e0b2bf775174a9aaa4469e13f99e3431aef8ffc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b73e4e7f968ca73f71fcf1081e0b2bf775174a9aaa4469e13f99e3431aef8ffc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:01 compute-0 podman[81266]: 2025-11-29 05:09:01.708188582 +0000 UTC m=+0.023464658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:01 compute-0 podman[81266]: 2025-11-29 05:09:01.80544513 +0000 UTC m=+0.120721196 container init 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:09:01 compute-0 podman[81266]: 2025-11-29 05:09:01.81134239 +0000 UTC m=+0.126618436 container start 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:09:01 compute-0 podman[81266]: 2025-11-29 05:09:01.814580962 +0000 UTC m=+0.129857008 container attach 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:09:01 compute-0 sudo[81310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph/ceph.conf.new
Nov 29 05:09:01 compute-0 sudo[81310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[81310]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 sudo[81337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:01 compute-0 sudo[81337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[81337]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 sudo[81362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 29 05:09:01 compute-0 sudo[81362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:01 compute-0 sudo[81362]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:01 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf
Nov 29 05:09:01 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf
Nov 29 05:09:02 compute-0 sudo[81387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:02 compute-0 sudo[81387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81387]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config
Nov 29 05:09:02 compute-0 sudo[81412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81412]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:02 compute-0 sudo[81456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81456]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config
Nov 29 05:09:02 compute-0 sudo[81481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81481]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:02 compute-0 sudo[81506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:09:02 compute-0 compassionate_shannon[81307]: 
Nov 29 05:09:02 compute-0 compassionate_shannon[81307]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 05:09:02 compute-0 sudo[81506]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 ceph-mon[75176]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:02 compute-0 ceph-mon[75176]: Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf
Nov 29 05:09:02 compute-0 systemd[1]: libpod-88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735.scope: Deactivated successfully.
Nov 29 05:09:02 compute-0 podman[81266]: 2025-11-29 05:09:02.34374773 +0000 UTC m=+0.659023816 container died 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b73e4e7f968ca73f71fcf1081e0b2bf775174a9aaa4469e13f99e3431aef8ffc-merged.mount: Deactivated successfully.
Nov 29 05:09:02 compute-0 podman[81266]: 2025-11-29 05:09:02.387982113 +0000 UTC m=+0.703258159 container remove 88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735 (image=quay.io/ceph/ceph:v18, name=compassionate_shannon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:09:02 compute-0 sudo[81533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf.new
Nov 29 05:09:02 compute-0 sudo[81533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 systemd[1]: libpod-conmon-88a104802654baa3aa0b87a9b761f951f65d7ec52e9ebe1528dedc48a3cc9735.scope: Deactivated successfully.
Nov 29 05:09:02 compute-0 sudo[81533]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 ansible-async_wrapper.py[81193]: Module complete (81193)
Nov 29 05:09:02 compute-0 sudo[81569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:02 compute-0 sudo[81569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81569]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:02 compute-0 sudo[81594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81594]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:02 compute-0 sudo[81619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81619]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf.new
Nov 29 05:09:02 compute-0 sudo[81667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81667]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:02 compute-0 sudo[81715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81715]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf.new
Nov 29 05:09:02 compute-0 sudo[81740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81740]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgmywufjlhfxouwyatxnwlxmdliqxeyp ; /usr/bin/python3'
Nov 29 05:09:02 compute-0 sudo[81788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:02 compute-0 sudo[81790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:02 compute-0 sudo[81790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81790]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf.new
Nov 29 05:09:02 compute-0 sudo[81816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81816]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:02 compute-0 sudo[81841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:02 compute-0 sudo[81841]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 python3[81793]: ansible-ansible.legacy.async_status Invoked with jid=j676110802398.81126 mode=status _async_dir=/root/.ansible_async
Nov 29 05:09:02 compute-0 sudo[81788]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:02 compute-0 sudo[81866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf.new /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.conf
Nov 29 05:09:02 compute-0 sudo[81866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[81866]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 05:09:03 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 05:09:03 compute-0 sudo[81904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:03 compute-0 sudo[81904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[81904]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 sudo[81970]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqzzprtzmulamccjpqqmwiiqslbfbyum ; /usr/bin/python3'
Nov 29 05:09:03 compute-0 sudo[81970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:03 compute-0 sudo[81957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 29 05:09:03 compute-0 sudo[81957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[81957]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 sudo[81990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:03 compute-0 sudo[81990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[81990]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 python3[81985]: ansible-ansible.legacy.async_status Invoked with jid=j676110802398.81126 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 05:09:03 compute-0 sudo[81970]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:03 compute-0 sudo[82015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph
Nov 29 05:09:03 compute-0 sudo[82015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[82015]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 ceph-mon[75176]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:09:03 compute-0 ceph-mon[75176]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 05:09:03 compute-0 sudo[82040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:03 compute-0 sudo[82040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[82040]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 sudo[82065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph/ceph.client.admin.keyring.new
Nov 29 05:09:03 compute-0 sudo[82065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[82065]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 sudo[82090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:03 compute-0 sudo[82090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[82090]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 sudo[82115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:03 compute-0 sudo[82115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[82115]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 sudo[82140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:03 compute-0 sudo[82140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[82140]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 sudo[82187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwgfcggxrshixscnurwvchwnsthswcit ; /usr/bin/python3'
Nov 29 05:09:03 compute-0 sudo[82187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:03 compute-0 sudo[82190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph/ceph.client.admin.keyring.new
Nov 29 05:09:03 compute-0 sudo[82190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[82190]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 python3[82193]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 05:09:03 compute-0 sudo[82187]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 sudo[82241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:03 compute-0 sudo[82241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[82241]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 sudo[82266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph/ceph.client.admin.keyring.new
Nov 29 05:09:03 compute-0 sudo[82266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[82266]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:03 compute-0 sudo[82291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:03 compute-0 sudo[82291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:03 compute-0 sudo[82291]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:04 compute-0 sudo[82316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph/ceph.client.admin.keyring.new
Nov 29 05:09:04 compute-0 sudo[82316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82316]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 sudo[82363]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdrytryoquntivbykwlwnblxggxwelut ; /usr/bin/python3'
Nov 29 05:09:04 compute-0 sudo[82363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:04 compute-0 sudo[82367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:04 compute-0 sudo[82367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82367]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 sudo[82392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 29 05:09:04 compute-0 sudo[82392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82392]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring
Nov 29 05:09:04 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring
Nov 29 05:09:04 compute-0 python3[82366]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:04 compute-0 sudo[82417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:04 compute-0 sudo[82417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82417]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 podman[82435]: 2025-11-29 05:09:04.316239613 +0000 UTC m=+0.053743053 container create 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:04 compute-0 ceph-mon[75176]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:04 compute-0 systemd[1]: Started libpod-conmon-9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8.scope.
Nov 29 05:09:04 compute-0 sudo[82452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config
Nov 29 05:09:04 compute-0 sudo[82452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82452]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 podman[82435]: 2025-11-29 05:09:04.291676323 +0000 UTC m=+0.029179743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c6a910ba99a3e9613dde96199ffdbe2ebe846ca528a7610309cda621c44a32/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c6a910ba99a3e9613dde96199ffdbe2ebe846ca528a7610309cda621c44a32/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38c6a910ba99a3e9613dde96199ffdbe2ebe846ca528a7610309cda621c44a32/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:04 compute-0 podman[82435]: 2025-11-29 05:09:04.414839612 +0000 UTC m=+0.152343062 container init 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:04 compute-0 podman[82435]: 2025-11-29 05:09:04.429363501 +0000 UTC m=+0.166866911 container start 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:09:04 compute-0 podman[82435]: 2025-11-29 05:09:04.432580433 +0000 UTC m=+0.170083853 container attach 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:04 compute-0 sudo[82484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:04 compute-0 sudo[82484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82484]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 sudo[82510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config
Nov 29 05:09:04 compute-0 sudo[82510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82510]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 sudo[82535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:04 compute-0 sudo[82535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82535]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 sudo[82560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring.new
Nov 29 05:09:04 compute-0 sudo[82560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82560]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 sudo[82585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:04 compute-0 sudo[82585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82585]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 sudo[82610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:04 compute-0 sudo[82610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82610]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 sudo[82654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:04 compute-0 sudo[82654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82654]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 sudo[82679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring.new
Nov 29 05:09:04 compute-0 sudo[82679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:04 compute-0 sudo[82679]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:04 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:09:04 compute-0 suspicious_heisenberg[82480]: 
Nov 29 05:09:04 compute-0 suspicious_heisenberg[82480]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 05:09:04 compute-0 sudo[82727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:04 compute-0 sudo[82727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:05 compute-0 sudo[82727]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:05 compute-0 systemd[1]: libpod-9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8.scope: Deactivated successfully.
Nov 29 05:09:05 compute-0 podman[82435]: 2025-11-29 05:09:05.005243098 +0000 UTC m=+0.742746508 container died 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 05:09:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-38c6a910ba99a3e9613dde96199ffdbe2ebe846ca528a7610309cda621c44a32-merged.mount: Deactivated successfully.
Nov 29 05:09:05 compute-0 podman[82435]: 2025-11-29 05:09:05.044312066 +0000 UTC m=+0.781815466 container remove 9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8 (image=quay.io/ceph/ceph:v18, name=suspicious_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 05:09:05 compute-0 systemd[1]: libpod-conmon-9a4249e24c76832cf0e1a352fdc0f6527399fb7f21857f14b83c0d9d79e3a6d8.scope: Deactivated successfully.
Nov 29 05:09:05 compute-0 sudo[82755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring.new
Nov 29 05:09:05 compute-0 sudo[82755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:05 compute-0 sudo[82755]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:05 compute-0 sudo[82363]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:05 compute-0 sudo[82789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:05 compute-0 sudo[82789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:05 compute-0 sudo[82789]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:05 compute-0 sudo[82814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring.new
Nov 29 05:09:05 compute-0 sudo[82814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:05 compute-0 sudo[82814]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:05 compute-0 sudo[82839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:05 compute-0 sudo[82839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:05 compute-0 sudo[82839]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:05 compute-0 sudo[82864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-93f82912-647c-5e78-b081-707d0a2966d8/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring.new /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring
Nov 29 05:09:05 compute-0 sudo[82864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:05 compute-0 sudo[82864]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:05 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:05 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:09:05 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:05 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev c8c28ab8-34ab-456a-b367-92efd2bc7176 (Updating crash deployment (+1 -> 1))
Nov 29 05:09:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 05:09:05 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 05:09:05 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 05:09:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:05 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:05 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 29 05:09:05 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 29 05:09:05 compute-0 sudo[82912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxigfifajiqnspvrdvmcogszuopezlux ; /usr/bin/python3'
Nov 29 05:09:05 compute-0 sudo[82912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:05 compute-0 ceph-mon[75176]: Updating compute-0:/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/config/ceph.client.admin.keyring
Nov 29 05:09:05 compute-0 ceph-mon[75176]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:09:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 05:09:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 05:09:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:05 compute-0 sudo[82914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:05 compute-0 sudo[82914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:05 compute-0 sudo[82914]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:05 compute-0 sudo[82940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:05 compute-0 sudo[82940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:05 compute-0 sudo[82940]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:05 compute-0 python3[82915]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:05 compute-0 sudo[82965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:05 compute-0 sudo[82965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:05 compute-0 sudo[82965]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:05 compute-0 podman[82988]: 2025-11-29 05:09:05.520893539 +0000 UTC m=+0.034295136 container create 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:09:05 compute-0 sudo[82996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:05 compute-0 sudo[82996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:05 compute-0 systemd[1]: Started libpod-conmon-844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be.scope.
Nov 29 05:09:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f64e3fed0637c407ced8ce5644d2a8e5703abb845327cd7d80c54e48293e59/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f64e3fed0637c407ced8ce5644d2a8e5703abb845327cd7d80c54e48293e59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01f64e3fed0637c407ced8ce5644d2a8e5703abb845327cd7d80c54e48293e59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:05 compute-0 podman[82988]: 2025-11-29 05:09:05.58367555 +0000 UTC m=+0.097077157 container init 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 05:09:05 compute-0 podman[82988]: 2025-11-29 05:09:05.58957359 +0000 UTC m=+0.102975187 container start 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:09:05 compute-0 podman[82988]: 2025-11-29 05:09:05.592710128 +0000 UTC m=+0.106111785 container attach 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 05:09:05 compute-0 podman[82988]: 2025-11-29 05:09:05.507087385 +0000 UTC m=+0.020489002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:05 compute-0 podman[83075]: 2025-11-29 05:09:05.90376958 +0000 UTC m=+0.055539192 container create 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:05 compute-0 systemd[1]: Started libpod-conmon-703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e.scope.
Nov 29 05:09:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:05 compute-0 podman[83075]: 2025-11-29 05:09:05.875254573 +0000 UTC m=+0.027024195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:05 compute-0 podman[83075]: 2025-11-29 05:09:05.971054979 +0000 UTC m=+0.122824571 container init 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:05 compute-0 podman[83075]: 2025-11-29 05:09:05.977187285 +0000 UTC m=+0.128956887 container start 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:05 compute-0 jovial_goldstine[83102]: 167 167
Nov 29 05:09:05 compute-0 podman[83075]: 2025-11-29 05:09:05.981083641 +0000 UTC m=+0.132853253 container attach 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:09:05 compute-0 systemd[1]: libpod-703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e.scope: Deactivated successfully.
Nov 29 05:09:05 compute-0 podman[83075]: 2025-11-29 05:09:05.982099343 +0000 UTC m=+0.133868925 container died 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-567260f5ca1ee9b4c183996d0f609b7d6ec837785c65bd0ff4372fd61953c72f-merged.mount: Deactivated successfully.
Nov 29 05:09:06 compute-0 podman[83075]: 2025-11-29 05:09:06.023460933 +0000 UTC m=+0.175230545 container remove 703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldstine, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:09:06 compute-0 systemd[1]: libpod-conmon-703829874a8a0ed09470dfcad93362294f24f17cdcda5937a9eb30dba4404d4e.scope: Deactivated successfully.
Nov 29 05:09:06 compute-0 systemd[1]: Reloading.
Nov 29 05:09:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 29 05:09:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3938091443' entity='client.admin' 
Nov 29 05:09:06 compute-0 systemd-sysv-generator[83155]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:06 compute-0 systemd-rc-local-generator[83152]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:06 compute-0 podman[83170]: 2025-11-29 05:09:06.231679413 +0000 UTC m=+0.028436448 container died 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 05:09:06 compute-0 systemd[1]: libpod-844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be.scope: Deactivated successfully.
Nov 29 05:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-01f64e3fed0637c407ced8ce5644d2a8e5703abb845327cd7d80c54e48293e59-merged.mount: Deactivated successfully.
Nov 29 05:09:06 compute-0 systemd[1]: Reloading.
Nov 29 05:09:06 compute-0 podman[83170]: 2025-11-29 05:09:06.378520081 +0000 UTC m=+0.175277086 container remove 844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be (image=quay.io/ceph/ceph:v18, name=determined_sanderson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:09:06 compute-0 ceph-mon[75176]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:06 compute-0 ceph-mon[75176]: Deploying daemon crash.compute-0 on compute-0
Nov 29 05:09:06 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3938091443' entity='client.admin' 
Nov 29 05:09:06 compute-0 sudo[82912]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:06 compute-0 systemd-rc-local-generator[83212]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:06 compute-0 systemd-sysv-generator[83215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:06 compute-0 ansible-async_wrapper.py[81192]: Done in kid B.
Nov 29 05:09:06 compute-0 systemd[1]: libpod-conmon-844492c3e5675a789fef7b911e49d92e4a4a7e9b73164fc0e03a8302f52db5be.scope: Deactivated successfully.
Nov 29 05:09:06 compute-0 sudo[83248]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxepwdleaefhvlpaopxetqheggvukhxu ; /usr/bin/python3'
Nov 29 05:09:06 compute-0 sudo[83248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:06 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:09:06 compute-0 python3[83252]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:06 compute-0 podman[83290]: 2025-11-29 05:09:06.821167118 +0000 UTC m=+0.037843173 container create 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:06 compute-0 podman[83309]: 2025-11-29 05:09:06.854028 +0000 UTC m=+0.038948208 container create 8c3d78b4917452e35c779012a58365df10d8c285ce9bb130d5016a615a7cf08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 05:09:06 compute-0 systemd[1]: Started libpod-conmon-8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc.scope.
Nov 29 05:09:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a90778bd7df11ceb6a9579ba539f28d3355881ed914987fadda66f1b87ad957/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a90778bd7df11ceb6a9579ba539f28d3355881ed914987fadda66f1b87ad957/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a90778bd7df11ceb6a9579ba539f28d3355881ed914987fadda66f1b87ad957/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9779ea0bcfb8197bfe961392525116dd653fccb6b00ac6040da181fca873c77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9779ea0bcfb8197bfe961392525116dd653fccb6b00ac6040da181fca873c77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9779ea0bcfb8197bfe961392525116dd653fccb6b00ac6040da181fca873c77/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9779ea0bcfb8197bfe961392525116dd653fccb6b00ac6040da181fca873c77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:06 compute-0 podman[83290]: 2025-11-29 05:09:06.807127369 +0000 UTC m=+0.023803424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:06 compute-0 podman[83290]: 2025-11-29 05:09:06.908973419 +0000 UTC m=+0.125649514 container init 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:09:06 compute-0 podman[83309]: 2025-11-29 05:09:06.913500508 +0000 UTC m=+0.098420736 container init 8c3d78b4917452e35c779012a58365df10d8c285ce9bb130d5016a615a7cf08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 05:09:06 compute-0 podman[83290]: 2025-11-29 05:09:06.918981489 +0000 UTC m=+0.135657544 container start 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:06 compute-0 podman[83290]: 2025-11-29 05:09:06.922378234 +0000 UTC m=+0.139054299 container attach 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:06 compute-0 podman[83309]: 2025-11-29 05:09:06.922965426 +0000 UTC m=+0.107885644 container start 8c3d78b4917452e35c779012a58365df10d8c285ce9bb130d5016a615a7cf08f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:06 compute-0 bash[83309]: 8c3d78b4917452e35c779012a58365df10d8c285ce9bb130d5016a615a7cf08f
Nov 29 05:09:06 compute-0 podman[83309]: 2025-11-29 05:09:06.83857018 +0000 UTC m=+0.023490408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:06 compute-0 systemd[1]: Started Ceph crash.compute-0 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:09:06 compute-0 sudo[82996]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 05:09:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:06 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev c8c28ab8-34ab-456a-b367-92efd2bc7176 (Updating crash deployment (+1 -> 1))
Nov 29 05:09:06 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event c8c28ab8-34ab-456a-b367-92efd2bc7176 (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 29 05:09:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 05:09:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:07 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 7264640e-8713-4381-9094-d38af8c362b6 does not exist
Nov 29 05:09:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 05:09:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:07 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 34b6a833-63d3-45c9-995f-22c48b727833 (Updating mgr deployment (+1 -> 2))
Nov 29 05:09:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhpwsh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 05:09:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhpwsh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 05:09:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhpwsh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 05:09:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 05:09:07 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:09:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:07 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:07 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.hhpwsh on compute-0
Nov 29 05:09:07 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.hhpwsh on compute-0
Nov 29 05:09:07 compute-0 sudo[83341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:07 compute-0 sudo[83341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:07 compute-0 sudo[83341]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 05:09:07 compute-0 sudo[83366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:07 compute-0 sudo[83366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:07 compute-0 sudo[83366]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:07 compute-0 sudo[83393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:07 compute-0 sudo[83393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:07 compute-0 sudo[83393]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.284+0000 7fcee0c50640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 05:09:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.284+0000 7fcee0c50640 -1 AuthRegistry(0x7fcedc066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 05:09:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.285+0000 7fcee0c50640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 05:09:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.285+0000 7fcee0c50640 -1 AuthRegistry(0x7fcee0c4f000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 05:09:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.286+0000 7fceda575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 05:09:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: 2025-11-29T05:09:07.286+0000 7fcee0c50640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 05:09:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 05:09:07 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-crash-compute-0[83335]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 29 05:09:07 compute-0 sudo[83437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:07 compute-0 sudo[83437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 29 05:09:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1060990233' entity='client.admin' 
Nov 29 05:09:07 compute-0 systemd[1]: libpod-8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc.scope: Deactivated successfully.
Nov 29 05:09:07 compute-0 podman[83490]: 2025-11-29 05:09:07.527335609 +0000 UTC m=+0.030681165 container died 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a90778bd7df11ceb6a9579ba539f28d3355881ed914987fadda66f1b87ad957-merged.mount: Deactivated successfully.
Nov 29 05:09:07 compute-0 podman[83490]: 2025-11-29 05:09:07.565295204 +0000 UTC m=+0.068640740 container remove 8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc (image=quay.io/ceph/ceph:v18, name=hopeful_ritchie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:07 compute-0 systemd[1]: libpod-conmon-8f58e7fe44f22a22a8a8d96d4fd84cd8a7e9715fee86b1bada1c5829683ecabc.scope: Deactivated successfully.
Nov 29 05:09:07 compute-0 sudo[83248]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:07 compute-0 podman[83529]: 2025-11-29 05:09:07.615255253 +0000 UTC m=+0.037898254 container create f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:09:07 compute-0 systemd[1]: Started libpod-conmon-f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e.scope.
Nov 29 05:09:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:07 compute-0 podman[83529]: 2025-11-29 05:09:07.686669854 +0000 UTC m=+0.109312875 container init f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:09:07 compute-0 podman[83529]: 2025-11-29 05:09:07.691991581 +0000 UTC m=+0.114634592 container start f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:09:07 compute-0 podman[83529]: 2025-11-29 05:09:07.597306468 +0000 UTC m=+0.019949499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:07 compute-0 elated_haibt[83546]: 167 167
Nov 29 05:09:07 compute-0 podman[83529]: 2025-11-29 05:09:07.69514405 +0000 UTC m=+0.117787051 container attach f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 05:09:07 compute-0 systemd[1]: libpod-f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e.scope: Deactivated successfully.
Nov 29 05:09:07 compute-0 podman[83529]: 2025-11-29 05:09:07.698029794 +0000 UTC m=+0.120672805 container died f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-89759b840b21f40c5678935a0727a3885eba667973ad162da6f8eed3d31efab9-merged.mount: Deactivated successfully.
Nov 29 05:09:07 compute-0 podman[83529]: 2025-11-29 05:09:07.733552885 +0000 UTC m=+0.156195886 container remove f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_haibt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:09:07 compute-0 systemd[1]: libpod-conmon-f5caf00659924f7214df4c56bd45668414a46815192d0840fb15eb88578a984e.scope: Deactivated successfully.
Nov 29 05:09:07 compute-0 systemd[1]: Reloading.
Nov 29 05:09:07 compute-0 systemd-rc-local-generator[83612]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:07 compute-0 systemd-sysv-generator[83616]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:08 compute-0 sudo[83587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjfsgexmbdahdqohzdqfxkqlspmfyrhm ; /usr/bin/python3'
Nov 29 05:09:08 compute-0 sudo[83587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:08 compute-0 systemd[1]: Reloading.
Nov 29 05:09:08 compute-0 systemd-rc-local-generator[83653]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:08 compute-0 systemd-sysv-generator[83656]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhpwsh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 05:09:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.hhpwsh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 05:09:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:09:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:08 compute-0 ceph-mon[75176]: Deploying daemon mgr.compute-0.hhpwsh on compute-0
Nov 29 05:09:08 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1060990233' entity='client.admin' 
Nov 29 05:09:08 compute-0 python3[83626]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:08 compute-0 podman[83665]: 2025-11-29 05:09:08.364184576 +0000 UTC m=+0.042071527 container create e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:08 compute-0 systemd[1]: Started libpod-conmon-e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da.scope.
Nov 29 05:09:08 compute-0 systemd[1]: Starting Ceph mgr.compute-0.hhpwsh for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:09:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea22b3a1bcdb1e13b1b85a634aaafe7dbf0e0b2acc2a47e803f8ff23be0bb4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea22b3a1bcdb1e13b1b85a634aaafe7dbf0e0b2acc2a47e803f8ff23be0bb4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea22b3a1bcdb1e13b1b85a634aaafe7dbf0e0b2acc2a47e803f8ff23be0bb4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:08 compute-0 podman[83665]: 2025-11-29 05:09:08.431813263 +0000 UTC m=+0.109700234 container init e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:08 compute-0 podman[83665]: 2025-11-29 05:09:08.345574756 +0000 UTC m=+0.023461757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:08 compute-0 podman[83665]: 2025-11-29 05:09:08.443682613 +0000 UTC m=+0.121569584 container start e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:08 compute-0 podman[83665]: 2025-11-29 05:09:08.450592375 +0000 UTC m=+0.128479346 container attach e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:09:08 compute-0 podman[83734]: 2025-11-29 05:09:08.592439556 +0000 UTC m=+0.036203148 container create dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fdf782b7ee3dd7a8bc250cb825eabc52947bc342f3279d3fc3e6d86f773b7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fdf782b7ee3dd7a8bc250cb825eabc52947bc342f3279d3fc3e6d86f773b7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fdf782b7ee3dd7a8bc250cb825eabc52947bc342f3279d3fc3e6d86f773b7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fdf782b7ee3dd7a8bc250cb825eabc52947bc342f3279d3fc3e6d86f773b7e/merged/var/lib/ceph/mgr/ceph-compute-0.hhpwsh supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:08 compute-0 podman[83734]: 2025-11-29 05:09:08.575465783 +0000 UTC m=+0.019229395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:08 compute-0 podman[83734]: 2025-11-29 05:09:08.675924902 +0000 UTC m=+0.119688514 container init dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 05:09:08 compute-0 podman[83734]: 2025-11-29 05:09:08.680574394 +0000 UTC m=+0.124337996 container start dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:08 compute-0 bash[83734]: dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3
Nov 29 05:09:08 compute-0 systemd[1]: Started Ceph mgr.compute-0.hhpwsh for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:09:08 compute-0 ceph-mgr[83753]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 05:09:08 compute-0 ceph-mgr[83753]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 05:09:08 compute-0 ceph-mgr[83753]: pidfile_write: ignore empty --pid-file
Nov 29 05:09:08 compute-0 sudo[83437]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:08 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:08 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 05:09:08 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:08 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 34b6a833-63d3-45c9-995f-22c48b727833 (Updating mgr deployment (+1 -> 2))
Nov 29 05:09:08 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 34b6a833-63d3-45c9-995f-22c48b727833 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 29 05:09:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 05:09:08 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:08 compute-0 sudo[83797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:08 compute-0 sudo[83797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:08 compute-0 sudo[83797]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:08 compute-0 ceph-mgr[83753]: mgr[py] Loading python module 'alerts'
Nov 29 05:09:08 compute-0 sudo[83822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:09:08 compute-0 sudo[83822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:08 compute-0 sudo[83822]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:08 compute-0 sudo[83847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:08 compute-0 sudo[83847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:08 compute-0 sudo[83847]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:08 compute-0 sudo[83872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:08 compute-0 sudo[83872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:08 compute-0 sudo[83872]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 29 05:09:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/395037058' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 05:09:09 compute-0 sudo[83897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:09 compute-0 sudo[83897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:09 compute-0 sudo[83897]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:09 compute-0 sudo[83923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:09:09 compute-0 sudo[83923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:09 compute-0 ceph-mgr[83753]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 05:09:09 compute-0 ceph-mgr[83753]: mgr[py] Loading python module 'balancer'
Nov 29 05:09:09 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:09.148+0000 7f2f6405a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 05:09:09 compute-0 ceph-mon[75176]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:09 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/395037058' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 05:09:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:09 compute-0 ceph-mgr[83753]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 05:09:09 compute-0 ceph-mgr[83753]: mgr[py] Loading python module 'cephadm'
Nov 29 05:09:09 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:09.412+0000 7f2f6405a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 05:09:09 compute-0 podman[84022]: 2025-11-29 05:09:09.570111939 +0000 UTC m=+0.046005133 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:09:09 compute-0 podman[84022]: 2025-11-29 05:09:09.660663341 +0000 UTC m=+0.136556545 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 29 05:09:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:09:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/395037058' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 05:09:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 05:09:09 compute-0 pensive_feynman[83682]: set require_min_compat_client to mimic
Nov 29 05:09:09 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 29 05:09:09 compute-0 systemd[1]: libpod-e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da.scope: Deactivated successfully.
Nov 29 05:09:09 compute-0 podman[83665]: 2025-11-29 05:09:09.788536233 +0000 UTC m=+1.466423184 container died e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 05:09:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7ea22b3a1bcdb1e13b1b85a634aaafe7dbf0e0b2acc2a47e803f8ff23be0bb4-merged.mount: Deactivated successfully.
Nov 29 05:09:09 compute-0 podman[83665]: 2025-11-29 05:09:09.837256725 +0000 UTC m=+1.515143676 container remove e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da (image=quay.io/ceph/ceph:v18, name=pensive_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:09:09 compute-0 systemd[1]: libpod-conmon-e116e89959ce45d9901ea1cd4233682f7e4d7c818efa72a9d888992c13d071da.scope: Deactivated successfully.
Nov 29 05:09:09 compute-0 sudo[83587]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:09 compute-0 sudo[83923]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 0ebbf7a5-0723-4ea6-81f8-5f0c7cd40eb3 does not exist
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 123c56fd-4de1-4d12-a0f4-49efb944cb56 does not exist
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 0f4d22fb-1a54-4dd4-9c44-303b731334cb does not exist
Nov 29 05:09:10 compute-0 sudo[84125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:10 compute-0 sudo[84125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:10 compute-0 sudo[84125]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:10 compute-0 sudo[84150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:09:10 compute-0 sudo[84150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:10 compute-0 sudo[84150]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 05:09:10 compute-0 sudo[84175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:10 compute-0 sudo[84175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:10 compute-0 sudo[84175]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:10 compute-0 sudo[84200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:10 compute-0 sudo[84200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:10 compute-0 sudo[84200]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:10 compute-0 sudo[84225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:10 compute-0 sudo[84225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:10 compute-0 sudo[84225]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:10 compute-0 sudo[84278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thwvoddbzuozbymikimirbpsprzjxbjm ; /usr/bin/python3'
Nov 29 05:09:10 compute-0 sudo[84278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:10 compute-0 sudo[84271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:10 compute-0 sudo[84271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:10 compute-0 python3[84297]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:10 compute-0 podman[84303]: 2025-11-29 05:09:10.5126937 +0000 UTC m=+0.036873561 container create 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:09:10 compute-0 systemd[1]: Started libpod-conmon-4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b.scope.
Nov 29 05:09:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b01bbb21f2cf8750614192f0b00aceb870828c44254af3dc7e919942c401f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b01bbb21f2cf8750614192f0b00aceb870828c44254af3dc7e919942c401f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f58b01bbb21f2cf8750614192f0b00aceb870828c44254af3dc7e919942c401f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:10 compute-0 podman[84329]: 2025-11-29 05:09:10.590774128 +0000 UTC m=+0.043161841 container create 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 05:09:10 compute-0 podman[84303]: 2025-11-29 05:09:10.496130356 +0000 UTC m=+0.020310237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:10 compute-0 podman[84303]: 2025-11-29 05:09:10.596397922 +0000 UTC m=+0.120577803 container init 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:10 compute-0 podman[84303]: 2025-11-29 05:09:10.603705742 +0000 UTC m=+0.127885603 container start 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:10 compute-0 podman[84303]: 2025-11-29 05:09:10.608500248 +0000 UTC m=+0.132680129 container attach 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:10 compute-0 systemd[1]: Started libpod-conmon-17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221.scope.
Nov 29 05:09:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:10 compute-0 podman[84329]: 2025-11-29 05:09:10.573983818 +0000 UTC m=+0.026371551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:10 compute-0 podman[84329]: 2025-11-29 05:09:10.676440932 +0000 UTC m=+0.128828665 container init 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 05:09:10 compute-0 podman[84329]: 2025-11-29 05:09:10.681700938 +0000 UTC m=+0.134088661 container start 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:09:10 compute-0 laughing_brahmagupta[84351]: 167 167
Nov 29 05:09:10 compute-0 podman[84329]: 2025-11-29 05:09:10.68546798 +0000 UTC m=+0.137855713 container attach 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:10 compute-0 systemd[1]: libpod-17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221.scope: Deactivated successfully.
Nov 29 05:09:10 compute-0 podman[84329]: 2025-11-29 05:09:10.686047443 +0000 UTC m=+0.138435156 container died 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-01e4deca56bd8709fe3eec8b4c834483840e668f9a0f25ed2d0f2c2e7f480dbd-merged.mount: Deactivated successfully.
Nov 29 05:09:10 compute-0 podman[84329]: 2025-11-29 05:09:10.735583602 +0000 UTC m=+0.187971325 container remove 17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 29 05:09:10 compute-0 systemd[1]: libpod-conmon-17cca17ef20604ce5d9f7274e660cdcd8186459cc8f1a23f2458f49bca1c2221.scope: Deactivated successfully.
Nov 29 05:09:10 compute-0 sudo[84271]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:10 compute-0 ceph-mon[75176]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/395037058' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 05:09:10 compute-0 ceph-mon[75176]: osdmap e3: 0 total, 0 up, 0 in
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.csskcz (unknown last config time)...
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.csskcz (unknown last config time)...
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.csskcz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.csskcz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:10 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.csskcz on compute-0
Nov 29 05:09:10 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.csskcz on compute-0
Nov 29 05:09:10 compute-0 sudo[84374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:10 compute-0 sudo[84374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:10 compute-0 sudo[84374]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:10 compute-0 sudo[84406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:10 compute-0 sudo[84406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:10 compute-0 sudo[84406]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:10 compute-0 sudo[84450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:10 compute-0 sudo[84450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:10 compute-0 sudo[84450]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:11 compute-0 sudo[84475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:11 compute-0 sudo[84475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:09:11 compute-0 sudo[84501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:11 compute-0 sudo[84501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:11 compute-0 sudo[84501]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:11 compute-0 sudo[84538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:11 compute-0 sudo[84538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:11 compute-0 sudo[84538]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:11 compute-0 sudo[84574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:11 compute-0 sudo[84574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:11 compute-0 sudo[84574]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:11 compute-0 podman[84565]: 2025-11-29 05:09:11.312537912 +0000 UTC m=+0.051897283 container create 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 2 completed events
Nov 29 05:09:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 05:09:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 systemd[1]: Started libpod-conmon-3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1.scope.
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:09:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:11 compute-0 sudo[84606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 29 05:09:11 compute-0 sudo[84606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:11 compute-0 podman[84565]: 2025-11-29 05:09:11.283697738 +0000 UTC m=+0.023057109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:11 compute-0 podman[84565]: 2025-11-29 05:09:11.384545056 +0000 UTC m=+0.123904397 container init 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:11 compute-0 ceph-mgr[83753]: mgr[py] Loading python module 'crash'
Nov 29 05:09:11 compute-0 podman[84565]: 2025-11-29 05:09:11.389171698 +0000 UTC m=+0.128531019 container start 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:09:11 compute-0 podman[84565]: 2025-11-29 05:09:11.391925798 +0000 UTC m=+0.131285119 container attach 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 05:09:11 compute-0 fervent_wilson[84629]: 167 167
Nov 29 05:09:11 compute-0 systemd[1]: libpod-3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1.scope: Deactivated successfully.
Nov 29 05:09:11 compute-0 podman[84639]: 2025-11-29 05:09:11.430455386 +0000 UTC m=+0.027235390 container died 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-29e148a15b676668f4765b347ae846124c7042b332300a81eb16be1d072c1c02-merged.mount: Deactivated successfully.
Nov 29 05:09:11 compute-0 podman[84639]: 2025-11-29 05:09:11.471392546 +0000 UTC m=+0.068172510 container remove 3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:09:11 compute-0 systemd[1]: libpod-conmon-3c236c9ba67e9b52e3e45dd5a8b7a3ae9714acae54898ed76959b7a687f09fb1.scope: Deactivated successfully.
Nov 29 05:09:11 compute-0 sudo[84475]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 sudo[84671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:11 compute-0 sudo[84671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:11 compute-0 sudo[84671]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:11 compute-0 sudo[84606]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 05:09:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 05:09:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 05:09:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 05:09:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [cephadm INFO root] Added host compute-0
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 29 05:09:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 05:09:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 29 05:09:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 05:09:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 29 05:09:11 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 29 05:09:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 29 05:09:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 sweet_stonebraker[84339]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 05:09:11 compute-0 sweet_stonebraker[84339]: Scheduled mon update...
Nov 29 05:09:11 compute-0 sweet_stonebraker[84339]: Scheduled mgr update...
Nov 29 05:09:11 compute-0 sweet_stonebraker[84339]: Scheduled osd.default_drive_group update...
Nov 29 05:09:11 compute-0 sudo[84700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:11 compute-0 sudo[84700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:11 compute-0 sudo[84700]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:11 compute-0 ceph-mgr[83753]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 05:09:11 compute-0 ceph-mgr[83753]: mgr[py] Loading python module 'dashboard'
Nov 29 05:09:11 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:11.679+0000 7f2f6405a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 05:09:11 compute-0 systemd[1]: libpod-4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b.scope: Deactivated successfully.
Nov 29 05:09:11 compute-0 podman[84303]: 2025-11-29 05:09:11.684310009 +0000 UTC m=+1.208489870 container died 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f58b01bbb21f2cf8750614192f0b00aceb870828c44254af3dc7e919942c401f-merged.mount: Deactivated successfully.
Nov 29 05:09:11 compute-0 sudo[84727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:11 compute-0 sudo[84727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:11 compute-0 podman[84303]: 2025-11-29 05:09:11.742672163 +0000 UTC m=+1.266852034 container remove 4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b (image=quay.io/ceph/ceph:v18, name=sweet_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 05:09:11 compute-0 sudo[84727]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:11 compute-0 systemd[1]: libpod-conmon-4e78eb016aaa290f24cacf3409244666c91a0314683fceacee434916cd20fb4b.scope: Deactivated successfully.
Nov 29 05:09:11 compute-0 sudo[84278]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:11 compute-0 ceph-mon[75176]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 05:09:11 compute-0 ceph-mon[75176]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: Reconfiguring mgr.compute-0.csskcz (unknown last config time)...
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.csskcz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:11 compute-0 ceph-mon[75176]: Reconfiguring daemon mgr.compute-0.csskcz on compute-0
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:11 compute-0 sudo[84765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:09:11 compute-0 sudo[84765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
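The cephadm "ls" invocation above is how the mgr refreshes its daemon inventory: the binary prints a JSON array describing every daemon deployed on this host. A sketch of consuming that output, reusing the exact path from the log (running it requires root on this host, so treat it as illustrative):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # Same call as the sudo line above, minus --image/--timeout for brevity.
    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "ls"],
        capture_output=True, check=True, text=True,
    ).stdout

    for daemon in json.loads(out):
        # Entries carry names like "mgr.compute-0.hhpwsh" plus their state.
        print(daemon.get("name"), daemon.get("state"))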
Nov 29 05:09:12 compute-0 sudo[84837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kntsswsknqvluqkykiizidwabfgnngvr ; /usr/bin/python3'
Nov 29 05:09:12 compute-0 sudo[84837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:12 compute-0 python3[84843]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:12 compute-0 podman[84873]: 2025-11-29 05:09:12.250964523 +0000 UTC m=+0.060322088 container create 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:12 compute-0 systemd[1]: Started libpod-conmon-8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8.scope.
Nov 29 05:09:12 compute-0 podman[84873]: 2025-11-29 05:09:12.226067705 +0000 UTC m=+0.035425320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080f92fd69b369a7007405cf4b3a05822c35266256de63fd8550a3310606ba7f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080f92fd69b369a7007405cf4b3a05822c35266256de63fd8550a3310606ba7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080f92fd69b369a7007405cf4b3a05822c35266256de63fd8550a3310606ba7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
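These kernel messages are the year-2038 notice for XFS filesystems without bigtime enabled: 0x7fffffff is the largest signed 32-bit time_t, so timestamps stop being representable early in 2038. Converting the limit makes the date explicit:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds since the epoch, the signed
    # 32-bit time_t maximum the kernel message refers to.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00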
Nov 29 05:09:12 compute-0 podman[84873]: 2025-11-29 05:09:12.375218045 +0000 UTC m=+0.184575600 container init 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:12 compute-0 podman[84873]: 2025-11-29 05:09:12.388348904 +0000 UTC m=+0.197706429 container start 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 05:09:12 compute-0 podman[84902]: 2025-11-29 05:09:12.389670913 +0000 UTC m=+0.091835201 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:12 compute-0 podman[84873]: 2025-11-29 05:09:12.408461906 +0000 UTC m=+0.217819471 container attach 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:09:12 compute-0 podman[84902]: 2025-11-29 05:09:12.490204444 +0000 UTC m=+0.192368692 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:09:12 compute-0 sudo[84765]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:12 compute-0 ceph-mon[75176]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:09:12 compute-0 ceph-mon[75176]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:12 compute-0 ceph-mon[75176]: Added host compute-0
Nov 29 05:09:12 compute-0 ceph-mon[75176]: Saving service mon spec with placement compute-0
Nov 29 05:09:12 compute-0 ceph-mon[75176]: Saving service mgr spec with placement compute-0
Nov 29 05:09:12 compute-0 ceph-mon[75176]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 05:09:12 compute-0 ceph-mon[75176]: Saving service osd.default_drive_group spec with placement compute-0
Nov 29 05:09:12 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 05:09:12 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3124182859' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 05:09:12 compute-0 hopeful_brattain[84909]: 
Nov 29 05:09:12 compute-0 hopeful_brattain[84909]: {"fsid":"93f82912-647c-5e78-b081-707d0a2966d8","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":78,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-29T05:07:51.349368+0000","services":{}},"progress_events":{}}
Nov 29 05:09:12 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:12 compute-0 systemd[1]: libpod-8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8.scope: Deactivated successfully.
Nov 29 05:09:12 compute-0 podman[84873]: 2025-11-29 05:09:12.989662349 +0000 UTC m=+0.799019884 container died 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 05:09:12 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:09:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:09:13 compute-0 ceph-mgr[83753]: mgr[py] Loading python module 'devicehealth'
Nov 29 05:09:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:09:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-080f92fd69b369a7007405cf4b3a05822c35266256de63fd8550a3310606ba7f-merged.mount: Deactivated successfully.
Nov 29 05:09:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:13 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d119416c-83f9-4b08-ba71-df4ca0804aea does not exist
Nov 29 05:09:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 05:09:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:13 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 1aa60e2d-3d9a-44de-86ca-53819f4dae3f (Updating mgr deployment (-1 -> 1))
Nov 29 05:09:13 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.hhpwsh from compute-0 -- ports [8765]
Nov 29 05:09:13 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.hhpwsh from compute-0 -- ports [8765]
Nov 29 05:09:13 compute-0 podman[84873]: 2025-11-29 05:09:13.165014426 +0000 UTC m=+0.974371961 container remove 8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8 (image=quay.io/ceph/ceph:v18, name=hopeful_brattain, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:13 compute-0 systemd[1]: libpod-conmon-8026be74d5d9ba2c3551ad76499ec86c899b54818ee124bf19bcb0cc8cf7edb8.scope: Deactivated successfully.
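The m=+ values podman prints are monotonic offsets in seconds from the start of that podman process, so the hopeful_brattain lifecycle above can be timed by simple subtraction:

    # Offsets copied from the podman[84873] events for hopeful_brattain.
    create = 0.060322088   # container create
    died   = 0.799019884   # container died
    remove = 0.974371961   # container remove

    print(f"ceph status run took {died - create:.3f}s")  # ~0.739s
    print(f"cleanup took {remove - died:.3f}s")          # ~0.175s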
Nov 29 05:09:13 compute-0 sudo[84837]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:13 compute-0 sudo[85026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:13 compute-0 sudo[85026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:13 compute-0 sudo[85026]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:13 compute-0 sudo[85051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:13 compute-0 sudo[85051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:13 compute-0 sudo[85051]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:13 compute-0 sudo[85076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:13 compute-0 sudo[85076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:13 compute-0 sudo[85076]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:13 compute-0 ceph-mgr[83753]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 05:09:13 compute-0 ceph-mgr[83753]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 05:09:13 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:13.363+0000 7f2f6405a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 05:09:13 compute-0 sudo[85101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --name mgr.compute-0.hhpwsh --force --tcp-ports 8765
Nov 29 05:09:13 compute-0 sudo[85101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:13 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.hhpwsh for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:09:13 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 05:09:13 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 05:09:13 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]:   from numpy import show_config as show_numpy_config
Nov 29 05:09:13 compute-0 ceph-mgr[83753]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 05:09:13 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:13.886+0000 7f2f6405a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 05:09:13 compute-0 ceph-mgr[83753]: mgr[py] Loading python module 'influx'
Nov 29 05:09:14 compute-0 ceph-mgr[83753]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 05:09:14 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh[83749]: 2025-11-29T05:09:14.112+0000 7f2f6405a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 05:09:14 compute-0 ceph-mgr[83753]: mgr[py] Loading python module 'insights'
Nov 29 05:09:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3124182859' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 05:09:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:09:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:14 compute-0 podman[85196]: 2025-11-29 05:09:14.179652702 +0000 UTC m=+0.298499796 container died dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 29 05:09:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-94fdf782b7ee3dd7a8bc250cb825eabc52947bc342f3279d3fc3e6d86f773b7e-merged.mount: Deactivated successfully.
Nov 29 05:09:14 compute-0 podman[85196]: 2025-11-29 05:09:14.453254481 +0000 UTC m=+0.572101535 container remove dce45f3e8ad19d82a86b7f1cde9a3e93c577d1921702fe43080298ef19a5b4a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:14 compute-0 bash[85196]: ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-hhpwsh
Nov 29 05:09:14 compute-0 systemd[1]: ceph-93f82912-647c-5e78-b081-707d0a2966d8@mgr.compute-0.hhpwsh.service: Main process exited, code=exited, status=143/n/a
Nov 29 05:09:14 compute-0 systemd[1]: ceph-93f82912-647c-5e78-b081-707d0a2966d8@mgr.compute-0.hhpwsh.service: Failed with result 'exit-code'.
Nov 29 05:09:14 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.hhpwsh for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:09:14 compute-0 systemd[1]: ceph-93f82912-647c-5e78-b081-707d0a2966d8@mgr.compute-0.hhpwsh.service: Consumed 6.334s CPU time.
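The status=143 above is not a crash in any meaningful sense: systemd reports signal deaths as 128 plus the signal number, and 143 decodes to SIGTERM, i.e. the old standby mgr exited on the TERM that the rm-daemon call sent. The decode:

    import signal

    # systemd encodes death-by-signal as 128 + signo.
    status = 143
    print(signal.Signals(status - 128).name)  # -> SIGTERM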
Nov 29 05:09:14 compute-0 systemd[1]: Reloading.
Nov 29 05:09:14 compute-0 systemd-rc-local-generator[85279]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:14 compute-0 systemd-sysv-generator[85284]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:14 compute-0 sudo[85101]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:14 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.hhpwsh
Nov 29 05:09:14 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.hhpwsh
Nov 29 05:09:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.hhpwsh"} v 0) v1
Nov 29 05:09:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.hhpwsh"}]: dispatch
Nov 29 05:09:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.hhpwsh"}]': finished
Nov 29 05:09:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 05:09:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:14 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 1aa60e2d-3d9a-44de-86ca-53819f4dae3f (Updating mgr deployment (-1 -> 1))
Nov 29 05:09:14 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 1aa60e2d-3d9a-44de-86ca-53819f4dae3f (Updating mgr deployment (-1 -> 1)) in 2 seconds
Nov 29 05:09:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 05:09:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:14 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 73e4e3c7-b0b6-4dbe-b5eb-72e5e9dd87c8 does not exist
Nov 29 05:09:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:09:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:09:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:09:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:09:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:15 compute-0 sudo[85295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:15 compute-0 sudo[85295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:15 compute-0 sudo[85295]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:15 compute-0 sudo[85320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:15 compute-0 sudo[85320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:15 compute-0 sudo[85320]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:15 compute-0 ceph-mon[75176]: Removing daemon mgr.compute-0.hhpwsh from compute-0 -- ports [8765]
Nov 29 05:09:15 compute-0 ceph-mon[75176]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:15 compute-0 ceph-mon[75176]: Removing key for mgr.compute-0.hhpwsh
Nov 29 05:09:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.hhpwsh"}]: dispatch
Nov 29 05:09:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.hhpwsh"}]': finished
Nov 29 05:09:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:09:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:09:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:15 compute-0 sudo[85345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:15 compute-0 sudo[85345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:15 compute-0 sudo[85345]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:15 compute-0 sudo[85370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:09:15 compute-0 sudo[85370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
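This is the OSD creation step for spec osd.default_drive_group: three pre-built LVs are handed to "ceph-volume lvm batch", with --no-auto to take the devices exactly as given and --no-systemd because cephadm, not ceph-volume, will manage the units. A hedged preview of the same call; --report prints what would be created without touching the devices (and assumes ceph-volume is available outside the container, unlike this run, which executes it inside the quay.io/ceph/ceph image):

    import subprocess

    # Dry-run counterpart of the batch call above. --report only prints
    # the planned OSDs; the real run adds --yes and executes via cephadm.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", "--report",
         "/dev/ceph_vg0/ceph_lv0",
         "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=True,
    )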
Nov 29 05:09:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:15 compute-0 podman[85437]: 2025-11-29 05:09:15.581976586 +0000 UTC m=+0.042386394 container create e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:15 compute-0 systemd[1]: Started libpod-conmon-e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25.scope.
Nov 29 05:09:15 compute-0 podman[85437]: 2025-11-29 05:09:15.562139439 +0000 UTC m=+0.022549207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:15 compute-0 podman[85437]: 2025-11-29 05:09:15.678493499 +0000 UTC m=+0.138903307 container init e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:09:15 compute-0 podman[85437]: 2025-11-29 05:09:15.685145474 +0000 UTC m=+0.145555262 container start e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:15 compute-0 podman[85437]: 2025-11-29 05:09:15.688493879 +0000 UTC m=+0.148903657 container attach e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:15 compute-0 wizardly_sanderson[85453]: 167 167
Nov 29 05:09:15 compute-0 systemd[1]: libpod-e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25.scope: Deactivated successfully.
Nov 29 05:09:15 compute-0 conmon[85453]: conmon e873b3ea4b85a531d2a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25.scope/container/memory.events
Nov 29 05:09:15 compute-0 podman[85437]: 2025-11-29 05:09:15.692396115 +0000 UTC m=+0.152805913 container died e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-888e7c58f974391c9e1c07898db2a516076851a79ca0ee1d35907f3963f7018a-merged.mount: Deactivated successfully.
Nov 29 05:09:15 compute-0 podman[85437]: 2025-11-29 05:09:15.733795625 +0000 UTC m=+0.194205393 container remove e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_sanderson, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:15 compute-0 systemd[1]: libpod-conmon-e873b3ea4b85a531d2a1651710089990c38d5bc3cba57fc4b14cd6242f624a25.scope: Deactivated successfully.
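wizardly_sanderson lives for a few milliseconds just to print "167 167", which matches the uid/gid of the ceph user inside the image; cephadm appears to probe this so it can chown host-side files to match the container. A sketch of such a probe (the exact command cephadm runs here is an assumption):

    import subprocess

    # Ask the image which uid/gid owns /var/lib/ceph; in
    # quay.io/ceph/ceph:v18 that is 167 167, matching the output above.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/ceph:v18", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, check=True, text=True,
    ).stdout
    print(out.strip())  # -> "167 167"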
Nov 29 05:09:15 compute-0 podman[85477]: 2025-11-29 05:09:15.904606972 +0000 UTC m=+0.047517627 container create d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:15 compute-0 systemd[1]: Started libpod-conmon-d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647.scope.
Nov 29 05:09:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:15 compute-0 podman[85477]: 2025-11-29 05:09:15.885771378 +0000 UTC m=+0.028682013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:15 compute-0 podman[85477]: 2025-11-29 05:09:15.981863131 +0000 UTC m=+0.124773766 container init d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:09:15 compute-0 podman[85477]: 2025-11-29 05:09:15.992650228 +0000 UTC m=+0.135560843 container start d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 05:09:15 compute-0 podman[85477]: 2025-11-29 05:09:15.995855738 +0000 UTC m=+0.138766353 container attach d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:16 compute-0 ceph-mon[75176]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:16 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 3 completed events
Nov 29 05:09:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 05:09:16 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:17 compute-0 elated_leakey[85493]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:09:17 compute-0 elated_leakey[85493]: --> relative data size: 1.0
Nov 29 05:09:17 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 05:09:17 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3cc3f442-c807-4e2a-868e-a4aae87af231
Nov 29 05:09:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231"} v 0) v1
Nov 29 05:09:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2312444307' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231"}]: dispatch
Nov 29 05:09:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 29 05:09:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:09:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2312444307' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231"}]': finished
Nov 29 05:09:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 05:09:17 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 29 05:09:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:17 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:17 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
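The "failed to return metadata for osd.0" above is a benign race: osd.0 already exists in the map (e4: 1 total, 0 up, 1 in) but the daemon has not booted, so the mon has nothing to report yet. Polling shows when it resolves (assumes an admin keyring):

    import json
    import subprocess
    import time

    # The mon returns ENOENT for "osd metadata" until osd.0 boots and
    # registers; retry instead of treating the first failure as fatal.
    for _ in range(30):
        proc = subprocess.run(
            ["ceph", "osd", "metadata", "0", "--format", "json"],
            capture_output=True, text=True,
        )
        if proc.returncode == 0:
            print(json.loads(proc.stdout).get("hostname"))
            break
        time.sleep(2)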
Nov 29 05:09:17 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 05:09:17 compute-0 lvm[85554]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 05:09:17 compute-0 lvm[85554]: VG ceph_vg0 finished
Nov 29 05:09:17 compute-0 elated_leakey[85493]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 29 05:09:17 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 05:09:17 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 05:09:17 compute-0 elated_leakey[85493]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:17 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 29 05:09:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 05:09:18 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770386242' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 05:09:18 compute-0 elated_leakey[85493]:  stderr: got monmap epoch 1
Nov 29 05:09:18 compute-0 elated_leakey[85493]: --> Creating keyring file for osd.0
Nov 29 05:09:18 compute-0 ceph-mon[75176]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:18 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2312444307' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231"}]: dispatch
Nov 29 05:09:18 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2312444307' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231"}]': finished
Nov 29 05:09:18 compute-0 ceph-mon[75176]: osdmap e4: 1 total, 0 up, 1 in
Nov 29 05:09:18 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:18 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3770386242' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 05:09:18 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 29 05:09:18 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 29 05:09:18 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 3cc3f442-c807-4e2a-868e-a4aae87af231 --setuser ceph --setgroup ceph
Nov 29 05:09:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:19 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 05:09:19 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 05:09:19 compute-0 ceph-mon[75176]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 05:09:19 compute-0 ceph-mon[75176]: Cluster is now healthy
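The health transition is exactly the comparison spelled out in the messages: TOO_FEW_OSDS warns while the OSD count is below osd_pool_default_size (set to 1 in this single-node run), so creating the first OSD clears it even before the daemon is up:

    # Restatement of the check as the log messages phrase it.
    def too_few_osds(num_osds: int, osd_pool_default_size: int) -> bool:
        return num_osds < osd_pool_default_size

    print(too_few_osds(0, 1))  # True:  "OSD count 0 < osd_pool_default_size 1"
    print(too_few_osds(1, 1))  # False: "Health check cleared: TOO_FEW_OSDS"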
Nov 29 05:09:20 compute-0 ceph-mon[75176]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:20 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:18.457+0000 7fa8ae499740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 05:09:20 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:18.457+0000 7fa8ae499740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 05:09:20 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:18.457+0000 7fa8ae499740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 05:09:20 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:18.457+0000 7fa8ae499740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 29 05:09:20 compute-0 elated_leakey[85493]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 05:09:20 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 05:09:20 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 29 05:09:20 compute-0 elated_leakey[85493]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:20 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:20 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 05:09:20 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 05:09:20 compute-0 elated_leakey[85493]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 29 05:09:20 compute-0 elated_leakey[85493]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 29 05:09:20 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 05:09:20 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b9801566-0c31-4202-a669-811037218c27
Nov 29 05:09:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "b9801566-0c31-4202-a669-811037218c27"} v 0) v1
Nov 29 05:09:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46659408' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9801566-0c31-4202-a669-811037218c27"}]: dispatch
Nov 29 05:09:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 29 05:09:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:09:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46659408' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9801566-0c31-4202-a669-811037218c27"}]': finished
Nov 29 05:09:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 05:09:21 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 29 05:09:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:21 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 05:09:21 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:21 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/46659408' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9801566-0c31-4202-a669-811037218c27"}]: dispatch
Nov 29 05:09:21 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/46659408' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9801566-0c31-4202-a669-811037218c27"}]': finished
Nov 29 05:09:21 compute-0 ceph-mon[75176]: osdmap e5: 2 total, 0 up, 2 in
Nov 29 05:09:21 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:21 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
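
Annotation: "failed to return metadata for osd.N: (2) No such file or directory" means the mgr asked the mon for metadata of OSDs that exist in the osdmap but whose daemons have not booted yet; the lookups succeed once each ceph-osd process starts and registers. A sketch that tolerates this bootstrap window (ceph osd metadata is the standard command; the None fallback is illustrative):

    import json
    import subprocess

    def osd_metadata(osd_id: int):
        # Returns the OSD's metadata dict, or None while the daemon has not
        # registered yet (the mon then answers with ENOENT, as in the log above).
        proc = subprocess.run(
            ["ceph", "osd", "metadata", str(osd_id), "--format", "json"],
            capture_output=True, text=True,
        )
        if proc.returncode != 0:
            return None
        return json.loads(proc.stdout)

    print(osd_metadata(0))
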
Nov 29 05:09:21 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 05:09:21 compute-0 lvm[86493]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 05:09:21 compute-0 lvm[86493]: VG ceph_vg1 finished
Nov 29 05:09:21 compute-0 elated_leakey[85493]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 29 05:09:21 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 29 05:09:21 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 05:09:21 compute-0 elated_leakey[85493]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:21 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 29 05:09:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 05:09:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668964783' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 05:09:21 compute-0 elated_leakey[85493]:  stderr: got monmap epoch 1
Nov 29 05:09:21 compute-0 elated_leakey[85493]: --> Creating keyring file for osd.1
Nov 29 05:09:21 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 29 05:09:22 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 29 05:09:22 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid b9801566-0c31-4202-a669-811037218c27 --setuser ceph --setgroup ceph
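
Annotation: the --osd-uuid handed to ceph-osd --mkfs (b9801566-...) is the same uuid registered moments earlier via "osd new", which is what binds the new daemon to osd ID 1. A small log-analysis sketch cross-checking the two, assuming this journal has been dumped to a file (the filename is hypothetical; the regexes match the formats visible above):

    import re

    journal = open("compute-0.log").read()  # hypothetical dump of this journal

    # uuids registered via `osd new` vs. uuids handed to `ceph-osd --mkfs`
    registered = set(re.findall(r'"prefix": "osd new", "uuid": "([0-9a-f-]+)"', journal))
    mkfsed = set(re.findall(r"--osd-uuid ([0-9a-f-]+)", journal))

    # Every mkfs'd uuid should have been registered with the mon first.
    assert mkfsed <= registered, mkfsed - registered
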
Nov 29 05:09:22 compute-0 ceph-mon[75176]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:22 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2668964783' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 05:09:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:24 compute-0 ceph-mon[75176]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:25 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:22.075+0000 7fc154fd0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 05:09:25 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:22.075+0000 7fc154fd0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 05:09:25 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:22.075+0000 7fc154fd0740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 05:09:25 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:22.075+0000 7fc154fd0740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 29 05:09:25 compute-0 elated_leakey[85493]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 29 05:09:25 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 05:09:25 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 29 05:09:25 compute-0 elated_leakey[85493]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:25 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:25 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 05:09:25 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 05:09:25 compute-0 elated_leakey[85493]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 29 05:09:25 compute-0 elated_leakey[85493]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 29 05:09:25 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 05:09:25 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new eec69945-b157-41e1-8fba-3992c2dca958
Nov 29 05:09:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "eec69945-b157-41e1-8fba-3992c2dca958"} v 0) v1
Nov 29 05:09:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/876598387' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "eec69945-b157-41e1-8fba-3992c2dca958"}]: dispatch
Nov 29 05:09:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 29 05:09:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:09:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/876598387' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "eec69945-b157-41e1-8fba-3992c2dca958"}]': finished
Nov 29 05:09:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 29 05:09:25 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 29 05:09:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:25 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 05:09:25 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:25 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
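
Annotation: "osdmap e6: 3 total, 0 up, 3 in" means all three OSDs are present in the map and weighted in, but no daemon is up yet, which is also why the three metadata lookups directly above fail. A sketch extracting those counters from cluster-log lines of the exact shape shown here:

    import re

    line = "osdmap e6: 3 total, 0 up, 3 in"
    m = re.search(r"osdmap e(\d+): (\d+) total, (\d+) up, (\d+) in", line)
    epoch, total, up, up_in = map(int, m.groups())
    assert (total, up, up_in) == (3, 0, 3)
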
Nov 29 05:09:26 compute-0 lvm[87427]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 05:09:26 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 05:09:26 compute-0 lvm[87427]: VG ceph_vg2 finished
Nov 29 05:09:26 compute-0 elated_leakey[85493]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 29 05:09:26 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 29 05:09:26 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 05:09:26 compute-0 elated_leakey[85493]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:26 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 29 05:09:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 05:09:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/234793256' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 05:09:26 compute-0 elated_leakey[85493]:  stderr: got monmap epoch 1
Nov 29 05:09:26 compute-0 elated_leakey[85493]: --> Creating keyring file for osd.2
Nov 29 05:09:26 compute-0 ceph-mon[75176]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/876598387' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "eec69945-b157-41e1-8fba-3992c2dca958"}]: dispatch
Nov 29 05:09:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/876598387' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "eec69945-b157-41e1-8fba-3992c2dca958"}]': finished
Nov 29 05:09:26 compute-0 ceph-mon[75176]: osdmap e6: 3 total, 0 up, 3 in
Nov 29 05:09:26 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:26 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:26 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/234793256' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 05:09:26 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 29 05:09:26 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 29 05:09:26 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid eec69945-b157-41e1-8fba-3992c2dca958 --setuser ceph --setgroup ceph
Nov 29 05:09:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:28 compute-0 ceph-mon[75176]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:29 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:26.761+0000 7f1e882aa740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 05:09:29 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:26.761+0000 7f1e882aa740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 05:09:29 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:26.762+0000 7f1e882aa740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 05:09:29 compute-0 elated_leakey[85493]:  stderr: 2025-11-29T05:09:26.762+0000 7f1e882aa740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 29 05:09:29 compute-0 elated_leakey[85493]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 29 05:09:29 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 05:09:29 compute-0 elated_leakey[85493]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 29 05:09:29 compute-0 elated_leakey[85493]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:29 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:29 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 05:09:29 compute-0 elated_leakey[85493]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 05:09:29 compute-0 elated_leakey[85493]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 29 05:09:29 compute-0 elated_leakey[85493]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 29 05:09:29 compute-0 systemd[1]: libpod-d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647.scope: Deactivated successfully.
Nov 29 05:09:29 compute-0 systemd[1]: libpod-d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647.scope: Consumed 6.664s CPU time.
Nov 29 05:09:29 compute-0 podman[88336]: 2025-11-29 05:09:29.899419634 +0000 UTC m=+0.022936848 container died d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-32f306970a3bc15dd5461875b8099e5da3a69faec103d26569ee9582416cef27-merged.mount: Deactivated successfully.
Nov 29 05:09:29 compute-0 podman[88336]: 2025-11-29 05:09:29.956904612 +0000 UTC m=+0.080421836 container remove d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:09:29 compute-0 systemd[1]: libpod-conmon-d306210574cfeb820d6a5ed031159e20d4f46e9d0533dfd5f9f30f0f37d98647.scope: Deactivated successfully.
Nov 29 05:09:29 compute-0 sudo[85370]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:30 compute-0 sudo[88351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:30 compute-0 sudo[88351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:30 compute-0 sudo[88351]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:30 compute-0 sudo[88376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:30 compute-0 sudo[88376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:30 compute-0 sudo[88376]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:30 compute-0 sudo[88401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:30 compute-0 sudo[88401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:30 compute-0 sudo[88401]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:30 compute-0 sudo[88426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:09:30 compute-0 sudo[88426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:30 compute-0 podman[88491]: 2025-11-29 05:09:30.56242659 +0000 UTC m=+0.045339943 container create fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:09:30 compute-0 systemd[1]: Started libpod-conmon-fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee.scope.
Nov 29 05:09:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:30 compute-0 podman[88491]: 2025-11-29 05:09:30.541189164 +0000 UTC m=+0.024102567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:30 compute-0 podman[88491]: 2025-11-29 05:09:30.638765706 +0000 UTC m=+0.121679069 container init fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:30 compute-0 podman[88491]: 2025-11-29 05:09:30.645783307 +0000 UTC m=+0.128696640 container start fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:09:30 compute-0 podman[88491]: 2025-11-29 05:09:30.64874835 +0000 UTC m=+0.131661723 container attach fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:30 compute-0 tender_lichterman[88507]: 167 167
Nov 29 05:09:30 compute-0 systemd[1]: libpod-fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee.scope: Deactivated successfully.
Nov 29 05:09:30 compute-0 podman[88491]: 2025-11-29 05:09:30.650702497 +0000 UTC m=+0.133615880 container died fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:09:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f7e1dc1ab94b21ea213afead605a2df305d5b1d730c3bdfc3b22576444e2ab3-merged.mount: Deactivated successfully.
Nov 29 05:09:30 compute-0 podman[88491]: 2025-11-29 05:09:30.689041009 +0000 UTC m=+0.171954332 container remove fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:09:30 compute-0 systemd[1]: libpod-conmon-fea3339c98c802760d385862f397d59bf74a808b4b8eb377610eaf7a00dd56ee.scope: Deactivated successfully.
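
Annotation: the short-lived tender_lichterman container above exists only to print "167 167", the uid and gid of the ceph user inside the quay.io/ceph/ceph image; cephadm uses that pair for the host-side chown calls seen earlier. A hedged reconstruction of the probe (cephadm stats /var/lib/ceph inside the image; treat the exact entrypoint wiring as an assumption):

    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

    # Probe the image for the ceph user's uid/gid, as cephadm does before
    # chowning OSD data directories on the host. Expected output: "167 167".
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    uid, gid = out.split()
    print(uid, gid)
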
Nov 29 05:09:30 compute-0 ceph-mon[75176]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:30 compute-0 podman[88530]: 2025-11-29 05:09:30.83704495 +0000 UTC m=+0.034322046 container create b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 05:09:30 compute-0 systemd[1]: Started libpod-conmon-b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b.scope.
Nov 29 05:09:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726aed43e815f1049c7231b733ef2e72dada71830be4431be822841288a9a75f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726aed43e815f1049c7231b733ef2e72dada71830be4431be822841288a9a75f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726aed43e815f1049c7231b733ef2e72dada71830be4431be822841288a9a75f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726aed43e815f1049c7231b733ef2e72dada71830be4431be822841288a9a75f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:30 compute-0 podman[88530]: 2025-11-29 05:09:30.912593307 +0000 UTC m=+0.109870423 container init b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:30 compute-0 podman[88530]: 2025-11-29 05:09:30.823022598 +0000 UTC m=+0.020299714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:30 compute-0 podman[88530]: 2025-11-29 05:09:30.920813877 +0000 UTC m=+0.118090973 container start b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 05:09:30 compute-0 podman[88530]: 2025-11-29 05:09:30.923699067 +0000 UTC m=+0.120976163 container attach b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:09:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]: {
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:     "0": [
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:         {
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "devices": [
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "/dev/loop3"
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             ],
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_name": "ceph_lv0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_size": "21470642176",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "name": "ceph_lv0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "tags": {
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.cluster_name": "ceph",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.crush_device_class": "",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.encrypted": "0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.osd_id": "0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.type": "block",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.vdo": "0"
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             },
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "type": "block",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "vg_name": "ceph_vg0"
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:         }
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:     ],
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:     "1": [
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:         {
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "devices": [
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "/dev/loop4"
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             ],
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_name": "ceph_lv1",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_size": "21470642176",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "name": "ceph_lv1",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "tags": {
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.cluster_name": "ceph",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.crush_device_class": "",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.encrypted": "0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.osd_id": "1",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.type": "block",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.vdo": "0"
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             },
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "type": "block",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "vg_name": "ceph_vg1"
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:         }
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:     ],
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:     "2": [
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:         {
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "devices": [
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "/dev/loop5"
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             ],
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_name": "ceph_lv2",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_size": "21470642176",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "name": "ceph_lv2",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "tags": {
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.cluster_name": "ceph",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.crush_device_class": "",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.encrypted": "0",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.osd_id": "2",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.type": "block",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:                 "ceph.vdo": "0"
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             },
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "type": "block",
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:             "vg_name": "ceph_vg2"
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:         }
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]:     ]
Nov 29 05:09:31 compute-0 inspiring_maxwell[88547]: }
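
Annotation: the JSON block above is the "ceph-volume lvm list --format json" output that cephadm requested at 05:09:30; it confirms the three OSDs landed on their intended loop-backed LVs with a matching cluster_fsid. A sketch turning that payload into an osd-id → device/fsid table (keys exactly as shown above; the capture filename is hypothetical):

    import json

    payload = json.load(open("lvm_list.json"))  # hypothetical capture of the JSON above

    for osd_id, lvs in sorted(payload.items()):
        for lv in lvs:
            print(osd_id, lv["devices"][0], lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/loop3 3cc3f442-c807-4e2a-868e-a4aae87af231
    # 1 /dev/loop4 b9801566-0c31-4202-a669-811037218c27
    # 2 /dev/loop5 eec69945-b157-41e1-8fba-3992c2dca958
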
Nov 29 05:09:31 compute-0 systemd[1]: libpod-b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b.scope: Deactivated successfully.
Nov 29 05:09:31 compute-0 podman[88530]: 2025-11-29 05:09:31.68446361 +0000 UTC m=+0.881740726 container died b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:09:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-726aed43e815f1049c7231b733ef2e72dada71830be4431be822841288a9a75f-merged.mount: Deactivated successfully.
Nov 29 05:09:31 compute-0 podman[88530]: 2025-11-29 05:09:31.74815764 +0000 UTC m=+0.945434746 container remove b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_maxwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:31 compute-0 systemd[1]: libpod-conmon-b243253ac259234b24760f726b07c11d8980e0b2e31ac23893904b754329915b.scope: Deactivated successfully.
Nov 29 05:09:31 compute-0 sudo[88426]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 05:09:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 05:09:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:31 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:31 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 29 05:09:31 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 29 05:09:31 compute-0 sudo[88570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:31 compute-0 sudo[88570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:31 compute-0 sudo[88570]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:31 compute-0 sudo[88595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:31 compute-0 sudo[88595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:31 compute-0 sudo[88595]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:32 compute-0 sudo[88620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:32 compute-0 sudo[88620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:32 compute-0 sudo[88620]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:32 compute-0 sudo[88645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:32 compute-0 sudo[88645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:32 compute-0 podman[88711]: 2025-11-29 05:09:32.421804754 +0000 UTC m=+0.054886415 container create 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 05:09:32 compute-0 systemd[1]: Started libpod-conmon-328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c.scope.
Nov 29 05:09:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:32 compute-0 podman[88711]: 2025-11-29 05:09:32.394810808 +0000 UTC m=+0.027892559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:32 compute-0 podman[88711]: 2025-11-29 05:09:32.493128929 +0000 UTC m=+0.126210670 container init 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:32 compute-0 podman[88711]: 2025-11-29 05:09:32.498629773 +0000 UTC m=+0.131711464 container start 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:32 compute-0 hopeful_visvesvaraya[88727]: 167 167
Nov 29 05:09:32 compute-0 systemd[1]: libpod-328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c.scope: Deactivated successfully.
Nov 29 05:09:32 compute-0 podman[88711]: 2025-11-29 05:09:32.503356598 +0000 UTC m=+0.136438299 container attach 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:32 compute-0 podman[88711]: 2025-11-29 05:09:32.504113136 +0000 UTC m=+0.137194837 container died 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-22468dff33d3adc570d9d2783d54f56121d7486c53d08cb3dbceec1506a158d8-merged.mount: Deactivated successfully.
Nov 29 05:09:32 compute-0 podman[88711]: 2025-11-29 05:09:32.552419071 +0000 UTC m=+0.185500732 container remove 328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_visvesvaraya, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 05:09:32 compute-0 systemd[1]: libpod-conmon-328db0100d920862f1534785d688e9a31a4f707d90a6dc17037bc4f39461cf2c.scope: Deactivated successfully.
Nov 29 05:09:32 compute-0 ceph-mon[75176]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 05:09:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:32 compute-0 ceph-mon[75176]: Deploying daemon osd.0 on compute-0
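
Annotation: to deploy osd.0, the cephadm mgr module fetches the daemon keyring ("auth get osd.0") plus a minimal client config ("config generate-minimal-conf"), then ships both to the host through the _orch deploy call below. A sketch reproducing the conf request; the output is expected to be little more than a [global] section with fsid and mon_host (exact contents are an assumption):

    import subprocess

    # Ask the mon for the same minimal conf the mgr requested above.
    conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(conf)  # roughly: [global], fsid = 93f82912-..., mon_host = ...
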
Nov 29 05:09:32 compute-0 podman[88758]: 2025-11-29 05:09:32.783850139 +0000 UTC m=+0.041987091 container create 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:32 compute-0 systemd[1]: Started libpod-conmon-6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092.scope.
Nov 29 05:09:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:32 compute-0 podman[88758]: 2025-11-29 05:09:32.763631618 +0000 UTC m=+0.021768580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:32 compute-0 podman[88758]: 2025-11-29 05:09:32.866673715 +0000 UTC m=+0.124810677 container init 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:32 compute-0 podman[88758]: 2025-11-29 05:09:32.879509457 +0000 UTC m=+0.137646409 container start 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:09:32 compute-0 podman[88758]: 2025-11-29 05:09:32.882909379 +0000 UTC m=+0.141046331 container attach 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:09:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:33 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test[88772]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 05:09:33 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test[88772]:                             [--no-systemd] [--no-tmpfs]
Nov 29 05:09:33 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test[88772]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 05:09:33 compute-0 systemd[1]: libpod-6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092.scope: Deactivated successfully.
Nov 29 05:09:33 compute-0 podman[88758]: 2025-11-29 05:09:33.544663595 +0000 UTC m=+0.802800547 container died 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-84ff305bb4602249a9cbfcc4a34be27be1f4a947adc8b57a5e404192f710922c-merged.mount: Deactivated successfully.
Nov 29 05:09:33 compute-0 podman[88758]: 2025-11-29 05:09:33.602083781 +0000 UTC m=+0.860220733 container remove 6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate-test, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:33 compute-0 systemd[1]: libpod-conmon-6ca5c02f1f4b1816da91226b6559f0f382948bb02c5c679ac2f632dea7679092.scope: Deactivated successfully.
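
The short-lived osd-0-activate-test container above is a capability probe, not a failure: an intentionally bogus flag makes ceph-volume's argparse parser exit non-zero and print its usage text, which reveals which activate options (--no-systemd, --no-tmpfs) this build supports. A minimal sketch of such a probe in Python, assuming podman and the image digest from this log; the real cephadm probe may differ in detail:

    import subprocess

    # Probe whether the containerized ceph-volume supports `activate` by passing
    # a deliberately invalid flag; argparse exits non-zero and prints its usage
    # text, which lists the options this build actually accepts. A sketch only.
    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

    proc = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "/usr/sbin/ceph-volume", IMAGE,
         "activate", "--bad-option"],
        capture_output=True, text=True,
    )
    # "unrecognized arguments: --bad-option" means the subcommand itself exists;
    # the usage lines name the flags (e.g. --no-systemd, --no-tmpfs) it supports.
    supports_no_tmpfs = "--no-tmpfs" in (proc.stdout + proc.stderr)
    print(proc.returncode, supports_no_tmpfs)
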
Nov 29 05:09:33 compute-0 systemd[1]: Reloading.
Nov 29 05:09:33 compute-0 systemd-rc-local-generator[88836]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:33 compute-0 systemd-sysv-generator[88840]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:34 compute-0 systemd[1]: Reloading.
Nov 29 05:09:34 compute-0 systemd-rc-local-generator[88877]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:34 compute-0 systemd-sysv-generator[88881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:34 compute-0 systemd[1]: Starting Ceph osd.0 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:09:34 compute-0 podman[88934]: 2025-11-29 05:09:34.763637772 +0000 UTC m=+0.047493555 container create bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:34 compute-0 ceph-mon[75176]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:34 compute-0 podman[88934]: 2025-11-29 05:09:34.741904074 +0000 UTC m=+0.025759867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:34 compute-0 podman[88934]: 2025-11-29 05:09:34.875674438 +0000 UTC m=+0.159530241 container init bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:34 compute-0 podman[88934]: 2025-11-29 05:09:34.886708445 +0000 UTC m=+0.170564218 container start bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:34 compute-0 podman[88934]: 2025-11-29 05:09:34.890876897 +0000 UTC m=+0.174732710 container attach bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:09:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 05:09:35 compute-0 bash[88934]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 05:09:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 05:09:35 compute-0 bash[88934]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 05:09:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 05:09:35 compute-0 bash[88934]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 05:09:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 05:09:35 compute-0 bash[88934]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 05:09:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:35 compute-0 bash[88934]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 05:09:35 compute-0 bash[88934]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 05:09:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate[88950]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 05:09:35 compute-0 bash[88934]: --> ceph-volume raw activate successful for osd ID: 0
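
The activate container logged each step it ran ("Running command: ..."). The same sequence, replayed directly with the device and OSD paths taken from this log (run as root); a sketch, not cephadm's own code:

    import subprocess

    # The preparation steps ceph-volume logged above, replayed verbatim.
    OSD_DIR = "/var/lib/ceph/osd/ceph-0"
    DEV = "/dev/mapper/ceph_vg0-ceph_lv0"

    for cmd in [
        ["/usr/bin/chown", "-R", "ceph:ceph", OSD_DIR],
        # Populate the OSD dir (keyring, whoami, ...) from the bluestore label:
        ["/usr/bin/ceph-bluestore-tool", "prime-osd-dir",
         "--path", OSD_DIR, "--no-mon-config", "--dev", DEV],
        ["/usr/bin/chown", "-h", "ceph:ceph", DEV],
        ["/usr/bin/chown", "-R", "ceph:ceph", "/dev/dm-0"],
        ["/usr/bin/ln", "-s", DEV, OSD_DIR + "/block"],
        ["/usr/bin/chown", "-R", "ceph:ceph", OSD_DIR],
    ]:
        subprocess.run(cmd, check=True)
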
Nov 29 05:09:36 compute-0 systemd[1]: libpod-bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84.scope: Deactivated successfully.
Nov 29 05:09:36 compute-0 systemd[1]: libpod-bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84.scope: Consumed 1.131s CPU time.
Nov 29 05:09:36 compute-0 podman[89075]: 2025-11-29 05:09:36.033806986 +0000 UTC m=+0.021514195 container died bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:09:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b5799d962988a0fcc7229f3dd0c31d6a869b9a027f208d4e24ec66fc4277dce-merged.mount: Deactivated successfully.
Nov 29 05:09:36 compute-0 podman[89075]: 2025-11-29 05:09:36.083579236 +0000 UTC m=+0.071286415 container remove bb58743b13d7cf0b20dc5cf98f3c4b630cb1eb5261805fe9161cffdf3bcc8d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:09:36 compute-0 podman[89134]: 2025-11-29 05:09:36.354067415 +0000 UTC m=+0.072028273 container create a8f7d50ad538c47dd2981f8645cc6e054eee6c03a6e64995802fe2156260bd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:09:36 compute-0 podman[89134]: 2025-11-29 05:09:36.314617405 +0000 UTC m=+0.032578313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e528f89be73781845112341a55c05f4eac5e588171e15d06cda410f3ad3cd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e528f89be73781845112341a55c05f4eac5e588171e15d06cda410f3ad3cd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e528f89be73781845112341a55c05f4eac5e588171e15d06cda410f3ad3cd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e528f89be73781845112341a55c05f4eac5e588171e15d06cda410f3ad3cd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e528f89be73781845112341a55c05f4eac5e588171e15d06cda410f3ad3cd7/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:36 compute-0 podman[89134]: 2025-11-29 05:09:36.449502117 +0000 UTC m=+0.167462965 container init a8f7d50ad538c47dd2981f8645cc6e054eee6c03a6e64995802fe2156260bd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:36 compute-0 podman[89134]: 2025-11-29 05:09:36.459088389 +0000 UTC m=+0.177049217 container start a8f7d50ad538c47dd2981f8645cc6e054eee6c03a6e64995802fe2156260bd59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:36 compute-0 bash[89134]: a8f7d50ad538c47dd2981f8645cc6e054eee6c03a6e64995802fe2156260bd59
Nov 29 05:09:36 compute-0 systemd[1]: Started Ceph osd.0 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:09:36 compute-0 sudo[88645]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:36 compute-0 ceph-osd[89151]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 05:09:36 compute-0 ceph-osd[89151]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 05:09:36 compute-0 ceph-osd[89151]: pidfile_write: ignore empty --pid-file
Nov 29 05:09:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bdev(0x55c4e59e3800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bdev(0x55c4e59e3800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bdev(0x55c4e59e3800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bdev(0x55c4e59e3800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bdev(0x55c4e681b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bdev(0x55c4e681b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bdev(0x55c4e681b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bdev(0x55c4e681b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bdev(0x55c4e681b800 /var/lib/ceph/osd/ceph-0/block) close
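
BlueStore warns above that the backing LV reports st_blksize 512 but keeps its 4 KiB bdev_block_size anyway. The logical sector size a block device advertises can be read with the BLKSSZGET ioctl; a sketch using the device path from this log (requires root):

    import fcntl, struct

    # BLKSSZGET returns the device's logical sector size; 0x1268 is the Linux
    # ioctl number. Expect 512 for this LV, per the bdev open message above.
    BLKSSZGET = 0x1268
    with open("/dev/mapper/ceph_vg0-ceph_lv0", "rb") as dev:
        buf = fcntl.ioctl(dev, BLKSSZGET, struct.pack("i", 0))
        print(struct.unpack("i", buf)[0])
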
Nov 29 05:09:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 05:09:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 05:09:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:36 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:36 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 29 05:09:36 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 29 05:09:36 compute-0 sudo[89164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:36 compute-0 sudo[89164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:36 compute-0 sudo[89164]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:36 compute-0 sudo[89189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:36 compute-0 sudo[89189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:36 compute-0 sudo[89189]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:36 compute-0 sudo[89214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:36 compute-0 sudo[89214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:36 compute-0 sudo[89214]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:36 compute-0 ceph-mon[75176]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:36 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:36 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:36 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 05:09:36 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
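
Each cmd=[{...}] audit entry above is a JSON mon command dispatched by the mgr to the monitor. The same commands can be issued through the python-rados bindings; the conffile location below is an assumption, not taken from this log:

    import json
    import rados

    # Replay the two mon commands the audit log shows the mgr dispatching.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        for cmd in ({"prefix": "auth get", "entity": "osd.1"},
                    {"prefix": "config generate-minimal-conf"}):
            ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(ret, outbuf.decode())
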
Nov 29 05:09:36 compute-0 ceph-osd[89151]: bdev(0x55c4e59e3800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 05:09:36 compute-0 sudo[89239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:36 compute-0 sudo[89239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:37 compute-0 ceph-osd[89151]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 29 05:09:37 compute-0 ceph-osd[89151]: load: jerasure load: lrc 
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 05:09:37 compute-0 podman[89313]: 2025-11-29 05:09:37.20493998 +0000 UTC m=+0.039552853 container create 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 29 05:09:37 compute-0 systemd[1]: Started libpod-conmon-25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522.scope.
Nov 29 05:09:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:37 compute-0 podman[89313]: 2025-11-29 05:09:37.188360617 +0000 UTC m=+0.022973510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:37 compute-0 podman[89313]: 2025-11-29 05:09:37.290641504 +0000 UTC m=+0.125254407 container init 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 29 05:09:37 compute-0 podman[89313]: 2025-11-29 05:09:37.298973647 +0000 UTC m=+0.133586520 container start 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:37 compute-0 podman[89313]: 2025-11-29 05:09:37.302865362 +0000 UTC m=+0.137478255 container attach 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:09:37 compute-0 practical_cartwright[89329]: 167 167
Nov 29 05:09:37 compute-0 systemd[1]: libpod-25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522.scope: Deactivated successfully.
Nov 29 05:09:37 compute-0 conmon[89329]: conmon 25db41e04cd8f87d1a68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522.scope/container/memory.events
Nov 29 05:09:37 compute-0 podman[89313]: 2025-11-29 05:09:37.306435489 +0000 UTC m=+0.141048362 container died 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 05:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-23f65bd79d9b9839a322df7d8593c915db149e6bc5532f94a4924bbfb08be18b-merged.mount: Deactivated successfully.
Nov 29 05:09:37 compute-0 podman[89313]: 2025-11-29 05:09:37.356639749 +0000 UTC m=+0.191252622 container remove 25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cartwright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 05:09:37 compute-0 systemd[1]: libpod-conmon-25db41e04cd8f87d1a68ab149a654f472962586eb03ad6aa95246de049e64522.scope: Deactivated successfully.
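
The practical_cartwright container printed only "167 167": the ceph user's uid and gid inside the image, matching the "set uid:gid to 167:167 (ceph:ceph)" line from ceph-osd above. cephadm runs a throwaway container like this to learn the uid/gid before chowning host paths; the exact in-container command below (stat on /var/lib/ceph) is an assumption:

    import subprocess

    # Report the owner of /var/lib/ceph inside the image -- consistent with the
    # "167 167" printed above, but the stat invocation is an assumption; the
    # log does not show the exact command the ephemeral container ran.
    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    uid, gid = map(int, out.split())
    print(uid, gid)
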
Nov 29 05:09:37 compute-0 ceph-osd[89151]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 05:09:37 compute-0 ceph-osd[89151]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
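
The two mclock figures above are consistent with dividing the device class's sequential bandwidth by its IOPS capacity. Assuming the Reef HDD defaults (150 MiB/s sequential bandwidth, 315 IOPS), the arithmetic lands within a couple of bytes of the logged per-IO cost:

    # mclock derives a per-IO byte cost from the configured sequential bandwidth
    # and IOPS capacity of the device class. The defaults below are assumptions;
    # the small residue versus the logged value is config rounding.
    bandwidth = 150 * 1024 * 1024   # assumed osd_mclock_max_sequential_bandwidth_hdd, bytes/s
    iops = 315                      # assumed osd_mclock_max_capacity_iops_hdd
    print(bandwidth)                # 157286400 -- the logged capacity per shard
    print(bandwidth / iops)         # ~499322.9 bytes/io vs. 499321.90 logged
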
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluefs mount
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluefs mount shared_bdev_used = 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: RocksDB version: 7.9.2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Git sha 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: DB SUMMARY
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: DB Session ID:  GB8E2MAM6AAV9M8FEZQI
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: CURRENT file:  CURRENT
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                         Options.error_if_exists: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.create_if_missing: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                                     Options.env: 0x55c4e686d2d0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                                Options.info_log: 0x55c4e5a6a8a0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                              Options.statistics: (nil)
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.use_fsync: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                              Options.db_log_dir: 
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.write_buffer_manager: 0x55c4e6976460
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.unordered_write: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.row_cache: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                              Options.wal_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.two_write_queues: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.wal_compression: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.atomic_flush: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.max_background_jobs: 4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.max_background_compactions: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.max_subcompactions: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.max_open_files: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Compression algorithms supported:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kZSTD supported: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kXpressCompression supported: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kBZip2Compression supported: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kLZ4Compression supported: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kZlibCompression supported: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kSnappyCompression supported: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a2c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a57090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a57090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a6a240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a57090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ae016f9e-706d-4aae-a4b3-9ea8654bd733
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977614015, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977614300, "job": 1, "event": "recovery_finished"}
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: freelist init
Nov 29 05:09:37 compute-0 ceph-osd[89151]: freelist _read_cfg
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluefs umount
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 05:09:37 compute-0 podman[89379]: 2025-11-29 05:09:37.662818036 +0000 UTC m=+0.052913078 container create ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 05:09:37 compute-0 systemd[1]: Started libpod-conmon-ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2.scope.
Nov 29 05:09:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:37 compute-0 podman[89379]: 2025-11-29 05:09:37.632245943 +0000 UTC m=+0.022340965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:37 compute-0 podman[89379]: 2025-11-29 05:09:37.741681794 +0000 UTC m=+0.131776786 container init ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:09:37 compute-0 podman[89379]: 2025-11-29 05:09:37.758686629 +0000 UTC m=+0.148781621 container start ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:37 compute-0 podman[89379]: 2025-11-29 05:09:37.761562149 +0000 UTC m=+0.151657141 container attach ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 05:09:37 compute-0 ceph-mon[75176]: Deploying daemon osd.1 on compute-0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bdev(0x55c4e689d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluefs mount
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluefs mount shared_bdev_used = 4718592
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: RocksDB version: 7.9.2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Git sha 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: DB SUMMARY
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: DB Session ID:  GB8E2MAM6AAV9M8FEZQJ
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: CURRENT file:  CURRENT
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                         Options.error_if_exists: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.create_if_missing: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                                     Options.env: 0x55c4e5bbf8f0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                                Options.info_log: 0x55c4e5a61180
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                              Options.statistics: (nil)
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.use_fsync: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                              Options.db_log_dir: 
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.write_buffer_manager: 0x55c4e69766e0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.unordered_write: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.row_cache: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                              Options.wal_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.two_write_queues: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.wal_compression: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.atomic_flush: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.max_background_jobs: 4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.max_background_compactions: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.max_subcompactions: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.max_open_files: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Compression algorithms supported:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kZSTD supported: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kXpressCompression supported: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kBZip2Compression supported: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kLZ4Compression supported: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kZlibCompression supported: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         kSnappyCompression supported: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
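The per-column-family option dumps in this log are byte-for-byte identical apart from the family name in each header, which is what you expect when BlueStore applies one shared option set to all of its RocksDB column families. To verify that on a captured log, or to catch the one value that differs on a misconfigured OSD, the dump can be parsed and diffed with a few lines of stdlib Python. A minimal sketch; the saved-journal path is an assumption, and the indented table_factory sub-options (which carry no "Options." prefix) are deliberately skipped:

```python
import re
import sys
from collections import defaultdict

# Header line, e.g. "... rocksdb: [db/column_family.cc:630] ---------------
# Options for column family [m-1]:"
CF_HEADER = re.compile(r"Options for column family \[([^\]]+)\]")
# Simple "Options.<key>: <value>" lines; the indented table_factory
# sub-options have no "Options." prefix and are skipped by this pattern.
OPTION = re.compile(r"Options\.([A-Za-z0-9_.\[\]]+):\s*(.+?)\s*$")

def parse(path):
    """Collect {column_family: {option: value}} from a saved journal file."""
    opts = defaultdict(dict)
    cf = None
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if (m := CF_HEADER.search(line)):
                cf = m.group(1)
            elif (m := OPTION.search(line)) and cf is not None:
                opts[cf][m.group(1)] = m.group(2)
    return opts

def diff(opts):
    """Print every option whose value differs between column families."""
    keys = set().union(*(fam.keys() for fam in opts.values()))
    for key in sorted(keys):
        values = {cf: fam.get(key, "<missing>") for cf, fam in opts.items()}
        if len(set(values.values())) > 1:
            print(key, values)

if __name__ == "__main__":
    diff(parse(sys.argv[1]))   # e.g. python3 diff_cf_opts.py osd.89151.journal
```

On this particular capture the diff prints nothing, which is the expected result: every family from m-1 through p-2 below carries the same values.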
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
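The write-buffer values repeated in each dump (write_buffer_size=16777216, max_write_buffer_number=64, min_write_buffer_number_to_merge=6) bound how much RAM one column family's memtables can pin and how much data each flush merges. A quick worked check of those numbers; the formulas reflect standard RocksDB memtable behaviour, nothing Ceph-specific:

```python
write_buffer_size = 16_777_216          # Options.write_buffer_size (16 MiB per memtable)
max_write_buffer_number = 64            # Options.max_write_buffer_number
min_write_buffer_number_to_merge = 6    # Options.min_write_buffer_number_to_merge

MiB = 1024 ** 2

# Upper bound on memtable memory one column family can hold before
# RocksDB stalls writes on that family.
ceiling = write_buffer_size * max_write_buffer_number
print(f"per-CF memtable ceiling: {ceiling / MiB:.0f} MiB")      # 1024 MiB

# Approximate data merged into one flush: RocksDB waits for this many
# immutable memtables before flushing them together to L0.
flush_batch = write_buffer_size * min_write_buffer_number_to_merge
print(f"typical flush batch:     {flush_batch / MiB:.0f} MiB")  # 96 MiB
```

In other words, small 16 MiB memtables are batched roughly six at a time into ~96 MiB flushes, while the 64-buffer ceiling leaves plenty of headroom before any per-family write stall.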
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
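Every table_factory dump reports the same block_cache pointer (0x55c4e5a571f0), so the BinnedLRUCache capacity of 483183820 bytes is one shared budget across all column families, not a per-family allowance. The sharding and size work out as follows; reading the 0.45 share as a BlueStore cache-ratio carve-out is an inference, since the ratio option itself does not appear in this log:

```python
capacity = 483_183_820   # block_cache_options.capacity from the dump
num_shard_bits = 4       # block_cache_options.num_shard_bits

MiB = 1024 ** 2
GiB = 1024 ** 3

shards = 2 ** num_shard_bits
print(f"shards:           {shards}")                              # 16
print(f"per-shard budget: {capacity / shards / MiB:.1f} MiB")     # ~28.8 MiB
print(f"total capacity:   {capacity / MiB:.1f} MiB "
      f"({capacity / GiB:.2f} of 1 GiB)")                         # ~460.8 MiB, 0.45
```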
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
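With level_compaction_dynamic_level_bytes=0, the per-level byte targets follow directly from max_bytes_for_level_base and the 8x multiplier, and target_file_size_multiplier=1 keeps SST files at 64 MiB on every level. A short sketch of the resulting LSM geometry, using only values from the dump (L0 is governed by file count, not a byte target):

```python
max_bytes_for_level_base = 1 * 1024**3   # Options.max_bytes_for_level_base (1 GiB)
multiplier = 8.0                         # Options.max_bytes_for_level_multiplier
num_levels = 7                           # Options.num_levels
target_file_size = 64 * 1024**2          # Options.target_file_size_base (64 MiB)

GiB = 1024 ** 3

# Level n (n >= 1) targets base * multiplier**(n - 1) bytes when
# dynamic level sizing is disabled, as it is in this dump.
for n in range(1, num_levels):
    target = max_bytes_for_level_base * multiplier ** (n - 1)
    files = target / target_file_size    # target_file_size_multiplier is 1
    print(f"L{n}: {target / GiB:>6.0f} GiB  (~{files:,.0f} files of 64 MiB)")
```

That yields 1, 8, 64, 512, 4096, and 32768 GiB for L1 through L6, far beyond what a single OSD's metadata will ever occupy; in practice only the first few levels fill.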
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
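The L0 trigger counts and the pending-compaction byte limits in these dumps define when this OSD's RocksDB starts throttling, and then stopping, foreground writes. Converted to human-readable units (plain arithmetic on the logged values; the stall semantics are standard RocksDB behaviour):

```python
l0_compaction_trigger = 8        # Options.level0_file_num_compaction_trigger
l0_slowdown_trigger = 20         # Options.level0_slowdown_writes_trigger
l0_stop_trigger = 36             # Options.level0_stop_writes_trigger
soft_pending = 68_719_476_736    # Options.soft_pending_compaction_bytes_limit
hard_pending = 274_877_906_944   # Options.hard_pending_compaction_bytes_limit

GiB = 1024 ** 3

print(f"L0 files to start compaction:  {l0_compaction_trigger}")
print(f"L0 files to throttle writes:   {l0_slowdown_trigger}")
print(f"L0 files to stop writes:       {l0_stop_trigger}")
print(f"pending-compaction soft stall: {soft_pending / GiB:.0f} GiB")  # 64 GiB
print(f"pending-compaction hard stop:  {hard_pending / GiB:.0f} GiB")  # 256 GiB
```

Given the ~96 MiB flush batches computed earlier, the 20-file slowdown threshold corresponds to roughly 2 GiB of unmerged L0 data before RocksDB begins delaying writes on a column family.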
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a571f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a57090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a57090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4e5a61fa0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55c4e5a57090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ae016f9e-706d-4aae-a4b3-9ea8654bd733
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977907666, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977912360, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ae016f9e-706d-4aae-a4b3-9ea8654bd733", "db_session_id": "GB8E2MAM6AAV9M8FEZQJ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977915714, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ae016f9e-706d-4aae-a4b3-9ea8654bd733", "db_session_id": "GB8E2MAM6AAV9M8FEZQJ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977918686, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392977, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ae016f9e-706d-4aae-a4b3-9ea8654bd733", "db_session_id": "GB8E2MAM6AAV9M8FEZQJ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392977920422, "job": 1, "event": "recovery_finished"}
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c4e5bc5c00
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: DB pointer 0x55c4e695fa00
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 29 05:09:37 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:09:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:09:37 compute-0 ceph-osd[89151]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 05:09:37 compute-0 ceph-osd[89151]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 05:09:37 compute-0 ceph-osd[89151]: _get_class not permitted to load lua
Nov 29 05:09:37 compute-0 ceph-osd[89151]: _get_class not permitted to load sdk
Nov 29 05:09:37 compute-0 ceph-osd[89151]: _get_class not permitted to load test_remote_reads
Nov 29 05:09:37 compute-0 ceph-osd[89151]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 05:09:37 compute-0 ceph-osd[89151]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 05:09:37 compute-0 ceph-osd[89151]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 05:09:37 compute-0 ceph-osd[89151]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 05:09:37 compute-0 ceph-osd[89151]: osd.0 0 load_pgs
Nov 29 05:09:37 compute-0 ceph-osd[89151]: osd.0 0 load_pgs opened 0 pgs
Nov 29 05:09:37 compute-0 ceph-osd[89151]: osd.0 0 log_to_monitors true
Nov 29 05:09:37 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0[89147]: 2025-11-29T05:09:37.950+0000 7fc8efb21740 -1 osd.0 0 log_to_monitors true
Nov 29 05:09:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 29 05:09:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 05:09:38 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test[89575]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 05:09:38 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test[89575]:                             [--no-systemd] [--no-tmpfs]
Nov 29 05:09:38 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test[89575]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 29 05:09:38 compute-0 systemd[1]: libpod-ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2.scope: Deactivated successfully.
Nov 29 05:09:38 compute-0 podman[89379]: 2025-11-29 05:09:38.377239192 +0000 UTC m=+0.767334284 container died ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:09:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-950b914a4830311be9afa79cb2c7b9eeb1be2e6f1b23a3d9841fb69a59535434-merged.mount: Deactivated successfully.
Nov 29 05:09:38 compute-0 podman[89379]: 2025-11-29 05:09:38.44617507 +0000 UTC m=+0.836270062 container remove ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:09:38 compute-0 systemd[1]: libpod-conmon-ef7d3b86443196943ba6090c7bcde6304ec4147a75e0a7b4b5a47edaca7826a2.scope: Deactivated successfully.
Nov 29 05:09:38 compute-0 systemd[1]: Reloading.
Nov 29 05:09:38 compute-0 systemd-rc-local-generator[89854]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:38 compute-0 systemd-sysv-generator[89858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 29 05:09:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:09:38 compute-0 ceph-mon[75176]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:38 compute-0 ceph-mon[75176]: from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 05:09:38 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 05:09:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 29 05:09:38 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 29 05:09:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 05:09:38 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 05:09:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 29 05:09:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:38 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:38 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:38 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:38 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 05:09:38 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:38 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:38 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 05:09:38 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 05:09:39 compute-0 systemd[1]: Reloading.
Nov 29 05:09:39 compute-0 systemd-rc-local-generator[89894]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:39 compute-0 systemd-sysv-generator[89897]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:39 compute-0 systemd[1]: Starting Ceph osd.1 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:09:39 compute-0 podman[89953]: 2025-11-29 05:09:39.502981793 +0000 UTC m=+0.052648311 container create 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:09:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:39 compute-0 podman[89953]: 2025-11-29 05:09:39.480357883 +0000 UTC m=+0.030024431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:39 compute-0 podman[89953]: 2025-11-29 05:09:39.582784524 +0000 UTC m=+0.132451072 container init 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:09:39 compute-0 podman[89953]: 2025-11-29 05:09:39.594507569 +0000 UTC m=+0.144174077 container start 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:09:39 compute-0 podman[89953]: 2025-11-29 05:09:39.597654076 +0000 UTC m=+0.147320634 container attach 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 29 05:09:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:09:39 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 05:09:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 29 05:09:39 compute-0 ceph-osd[89151]: osd.0 0 done with init, starting boot process
Nov 29 05:09:39 compute-0 ceph-osd[89151]: osd.0 0 start_boot
Nov 29 05:09:39 compute-0 ceph-osd[89151]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 05:09:39 compute-0 ceph-osd[89151]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 05:09:39 compute-0 ceph-osd[89151]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 05:09:39 compute-0 ceph-osd[89151]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 05:09:39 compute-0 ceph-osd[89151]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 29 05:09:39 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 29 05:09:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:39 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:39 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:39 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 05:09:39 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:39 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:39 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:39 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3779420554; not ready for session (expect reconnect)
Nov 29 05:09:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:39 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:39 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 05:09:39 compute-0 ceph-mon[75176]: from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 05:09:39 compute-0 ceph-mon[75176]: osdmap e7: 3 total, 0 up, 3 in
Nov 29 05:09:39 compute-0 ceph-mon[75176]: from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 05:09:39 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:39 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:39 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 05:09:40 compute-0 bash[89953]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 05:09:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 05:09:40 compute-0 bash[89953]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 05:09:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 05:09:40 compute-0 bash[89953]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 29 05:09:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 05:09:40 compute-0 bash[89953]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 29 05:09:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:40 compute-0 bash[89953]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 05:09:40 compute-0 bash[89953]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 29 05:09:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate[89969]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 05:09:40 compute-0 bash[89953]: --> ceph-volume raw activate successful for osd ID: 1
Nov 29 05:09:40 compute-0 systemd[1]: libpod-6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448.scope: Deactivated successfully.
Nov 29 05:09:40 compute-0 podman[89953]: 2025-11-29 05:09:40.611150736 +0000 UTC m=+1.160817234 container died 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:09:40 compute-0 systemd[1]: libpod-6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448.scope: Consumed 1.030s CPU time.
Nov 29 05:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-960ae188e036cc45b5552413b8bd736e956a482e29f1eabd885be99626fbf8ba-merged.mount: Deactivated successfully.
Nov 29 05:09:40 compute-0 podman[89953]: 2025-11-29 05:09:40.752509725 +0000 UTC m=+1.302176273 container remove 6e5680f38e24f57608f1c2ef16671defabac7dc59f31b316924867ea49f92448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:40 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3779420554; not ready for session (expect reconnect)
Nov 29 05:09:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:40 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:40 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 05:09:40 compute-0 ceph-mon[75176]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:40 compute-0 ceph-mon[75176]: from='osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 05:09:40 compute-0 ceph-mon[75176]: osdmap e8: 3 total, 0 up, 3 in
Nov 29 05:09:40 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:40 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:40 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:40 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:40 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:40 compute-0 podman[90161]: 2025-11-29 05:09:40.956584728 +0000 UTC m=+0.054755763 container create 82f057625789bfb7e6d0b1b3b5254ab9a549f654018026bd598287a74fc7a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee71a98009077c29ff391c32549068912c072663c7fc165f250ed8e2140dd683/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee71a98009077c29ff391c32549068912c072663c7fc165f250ed8e2140dd683/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee71a98009077c29ff391c32549068912c072663c7fc165f250ed8e2140dd683/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee71a98009077c29ff391c32549068912c072663c7fc165f250ed8e2140dd683/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee71a98009077c29ff391c32549068912c072663c7fc165f250ed8e2140dd683/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:41 compute-0 podman[90161]: 2025-11-29 05:09:40.922381186 +0000 UTC m=+0.020552231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:41 compute-0 podman[90161]: 2025-11-29 05:09:41.044878466 +0000 UTC m=+0.143049521 container init 82f057625789bfb7e6d0b1b3b5254ab9a549f654018026bd598287a74fc7a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 05:09:41 compute-0 podman[90161]: 2025-11-29 05:09:41.051607439 +0000 UTC m=+0.149778464 container start 82f057625789bfb7e6d0b1b3b5254ab9a549f654018026bd598287a74fc7a45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 05:09:41 compute-0 bash[90161]: 82f057625789bfb7e6d0b1b3b5254ab9a549f654018026bd598287a74fc7a45e
Nov 29 05:09:41 compute-0 systemd[1]: Started Ceph osd.1 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:09:41 compute-0 ceph-osd[90181]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 05:09:41 compute-0 ceph-osd[90181]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 05:09:41 compute-0 ceph-osd[90181]: pidfile_write: ignore empty --pid-file
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x5590958d9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x5590958d9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x5590958d9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x5590958d9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x559096713800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x559096713800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x559096713800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x559096713800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x559096713800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 05:09:41 compute-0 sudo[89239]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 29 05:09:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 05:09:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 29 05:09:41 compute-0 sudo[90194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:41 compute-0 sudo[90194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:41 compute-0 sudo[90194]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:09:41
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [balancer INFO root] No pools available
Nov 29 05:09:41 compute-0 sudo[90219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:41 compute-0 sudo[90219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:41 compute-0 sudo[90219]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x5590958d9800 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:09:41 compute-0 sudo[90244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:41 compute-0 sudo[90244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:41 compute-0 sudo[90244]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:41 compute-0 sudo[90269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:09:41 compute-0 sudo[90269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:41 compute-0 ceph-osd[90181]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 29 05:09:41 compute-0 ceph-osd[90181]: load: jerasure load: lrc 
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3779420554; not ready for session (expect reconnect)
Nov 29 05:09:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:41 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 05:09:41 compute-0 podman[90341]: 2025-11-29 05:09:41.86039418 +0000 UTC m=+0.051405271 container create 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 05:09:41 compute-0 ceph-mon[75176]: purged_snaps scrub starts
Nov 29 05:09:41 compute-0 ceph-mon[75176]: purged_snaps scrub ok
Nov 29 05:09:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 05:09:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:41 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 05:09:41 compute-0 systemd[1]: Started libpod-conmon-19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9.scope.
Nov 29 05:09:41 compute-0 podman[90341]: 2025-11-29 05:09:41.833167529 +0000 UTC m=+0.024178700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:41 compute-0 podman[90341]: 2025-11-29 05:09:41.965823254 +0000 UTC m=+0.156834365 container init 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:09:41 compute-0 podman[90341]: 2025-11-29 05:09:41.974083785 +0000 UTC m=+0.165094876 container start 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:41 compute-0 epic_chebyshev[90361]: 167 167
Nov 29 05:09:41 compute-0 systemd[1]: libpod-19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9.scope: Deactivated successfully.
Nov 29 05:09:41 compute-0 podman[90341]: 2025-11-29 05:09:41.989976583 +0000 UTC m=+0.180987674 container attach 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 05:09:41 compute-0 podman[90341]: 2025-11-29 05:09:41.990605237 +0000 UTC m=+0.181616328 container died 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 05:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b819f80dbab16a28f49433139465e4f05ae821c6cb83d4c113f3bf532e1e53a7-merged.mount: Deactivated successfully.
Nov 29 05:09:42 compute-0 podman[90341]: 2025-11-29 05:09:42.08238485 +0000 UTC m=+0.273395981 container remove 19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:42 compute-0 systemd[1]: libpod-conmon-19557f9e3027f3a9cccbf46b55cfed6ad8c49bab06ff87d02d96697be960eda9.scope: Deactivated successfully.
Nov 29 05:09:42 compute-0 ceph-osd[90181]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 05:09:42 compute-0 ceph-osd[90181]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679ec00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluefs mount
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluefs mount shared_bdev_used = 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: RocksDB version: 7.9.2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Git sha 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: DB SUMMARY
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: DB Session ID:  Y4COQFGEX2AH8MDYLW2D
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: CURRENT file:  CURRENT
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                         Options.error_if_exists: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.create_if_missing: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                                     Options.env: 0x559096765c70
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                                Options.info_log: 0x5590959608a0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                              Options.statistics: (nil)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.use_fsync: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                              Options.db_log_dir: 
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.write_buffer_manager: 0x559096876460
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.unordered_write: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.row_cache: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                              Options.wal_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.two_write_queues: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.wal_compression: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.atomic_flush: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.max_background_jobs: 4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.max_background_compactions: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.max_subcompactions: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.max_open_files: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Compression algorithms supported:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kZSTD supported: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kXpressCompression supported: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kBZip2Compression supported: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kLZ4Compression supported: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kZlibCompression supported: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kSnappyCompression supported: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
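The Options.* block above, repeated below once per column family, is RocksDB echoing its effective per-column-family configuration as ceph-osd opens the BlueStore metadata database. As a minimal sketch only (ceph-osd derives these values from Ceph configuration strings such as bluestore_rocksdb_options, not from hand-written C++ like this), the memtable and compaction settings in the dump map onto the public RocksDB API roughly as follows:

    // Sketch: per-column-family settings copied from the log above.
    // Illustrative only; not the code path ceph-osd actually uses.
    #include <rocksdb/options.h>

    rocksdb::ColumnFamilyOptions MakeCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;            // 16777216: 16 MiB memtables
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;    // a flush merges 6 memtables
      cf.compression = rocksdb::kLZ4Compression;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 << 20;        // 67108864
      cf.max_bytes_for_level_base = 1 << 30;      // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                           // 30 days
      cf.force_consistency_checks = true;
      return cf;
    }

Relative to stock RocksDB defaults (2 write buffers, merge threshold 1), the 64-buffer / merge-6 arrangement trades memory for fewer, larger flushes, which suits bursty metadata write patterns.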
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
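The column families m-0, m-1, m-2 and p-0, p-1 in this dump are sharded: BlueStore's bluestore_rocksdb_cfs setting uses a count syntax (for example m(3)) to split one logical family (here, what appear to be BlueStore's per-pool "m" and per-PG "p" omap prefixes) into several RocksDB column families, which RocksDB then reports individually. All shards share one set of options, which is why the dump repeats verbatim. Opening such a database directly with the RocksDB API would look roughly like the sketch below; the path is a hypothetical placeholder, and only the families visible in this excerpt are listed (a real BlueStore DB contains more):

    // Sketch: opening a DB whose column families match the names in the log.
    // The path and the family list are illustrative assumptions.
    #include <rocksdb/db.h>
    #include <vector>

    rocksdb::Status OpenSharded(rocksdb::DB** db,
                                std::vector<rocksdb::ColumnFamilyHandle*>* handles) {
      rocksdb::DBOptions db_opts;                          // DB-wide options
      rocksdb::ColumnFamilyOptions cf = MakeCfOptions();   // earlier sketch
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
          {rocksdb::kDefaultColumnFamilyName, cf},
          {"m-0", cf}, {"m-1", cf}, {"m-2", cf},
          {"p-0", cf}, {"p-1", cf},
      };
      return rocksdb::DB::Open(db_opts, "/var/lib/ceph/osd/ceph-0/db",
                               cfs, handles, db);
    }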
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
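Each family's table_factory block reports the same block-based table setup and the same block_cache pointer (0x55909594d1f0): one shared BinnedLRUCache of capacity 483183820 bytes (roughly 460 MiB, 2^4 shards) serves every column family. BinnedLRUCache is Ceph's own cache implementation plugged into RocksDB; with stock RocksDB, the closest approximation of this table configuration would be the following sketch (the bloom bits-per-key value is an assumption, since the dump only says "bloomfilter"):

    // Sketch: approximate the table_factory settings from the dump using
    // stock RocksDB types (NewLRUCache stands in for Ceph's BinnedLRUCache).
    #include <rocksdb/table.h>
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>

    rocksdb::BlockBasedTableOptions MakeTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.block_size = 4096;
      t.block_restart_interval = 16;
      t.format_version = 5;
      t.checksum = rocksdb::kXXH3;                 // "checksum: 4" in the dump
      t.whole_key_filtering = true;
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      return t;
    }
    // Attach per family: cf.table_factory.reset(
    //     rocksdb::NewBlockBasedTableFactory(MakeTableOptions()));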
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
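Two of the repeated compression lines deserve a note. Options.compression_opts.level: 32767 is not a real LZ4 or zlib level; it is RocksDB's kDefaultCompressionLevel sentinel, meaning "use the codec library's own default". And bottommost_compression: Disabled means the bottommost-level override is unset, so the ordinary per-family LZ4 setting applies to every level. In API terms, a sketch:

    // Sketch: the compression settings repeated for every column family above.
    #include <rocksdb/options.h>

    void ApplyCompression(rocksdb::ColumnFamilyOptions& cf) {
      cf.compression = rocksdb::kLZ4Compression;
      // "Disabled" in the dump is the no-override sentinel:
      cf.bottommost_compression = rocksdb::kDisableCompressionOption;
      cf.compression_opts.window_bits = -14;       // only meaningful for zlib
      cf.compression_opts.level =
          rocksdb::CompressionOptions::kDefaultCompressionLevel;  // 32767
      cf.compression_opts.max_dict_bytes = 0;      // dictionary compression off
      cf.compression_opts.parallel_threads = 1;
    }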
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
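The table_properties_collectors line that recurs in every family dump is RocksDB's compact-on-deletion collector: with a sliding window of 32768 entries and a deletion trigger of 16384, any SST file in which half of some 32768-entry window consists of tombstones is marked for compaction. For omap-style workloads that delete heavily, this keeps tombstone-laden key ranges from slowing iterators. The equivalent API call, as a sketch with the values from the log:

    // Sketch: the CompactOnDeletionCollector settings shown in the dump.
    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    void AddDeletionTrigger(rocksdb::ColumnFamilyOptions& cf) {
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));   // 0 disables the ratio criterion
    }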
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5590959602c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 38520058-5321-4c20-b65e-18ccdc165478
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982183723, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982183904, "job": 1, "event": "recovery_finished"}
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: freelist init
Nov 29 05:09:42 compute-0 ceph-osd[90181]: freelist _read_cfg
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluefs umount
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) close
Nov 29 05:09:42 compute-0 podman[90587]: 2025-11-29 05:09:42.386487016 +0000 UTC m=+0.069336358 container create d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:42 compute-0 podman[90587]: 2025-11-29 05:09:42.342953277 +0000 UTC m=+0.025802489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bdev(0x55909679f400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluefs mount
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluefs mount shared_bdev_used = 4718592
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: RocksDB version: 7.9.2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Git sha 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: DB SUMMARY
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: DB Session ID:  Y4COQFGEX2AH8MDYLW2C
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: CURRENT file:  CURRENT
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                         Options.error_if_exists: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.create_if_missing: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                                     Options.env: 0x55909691e460
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                                Options.info_log: 0x559095960600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                              Options.statistics: (nil)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.use_fsync: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                              Options.db_log_dir: 
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.write_buffer_manager: 0x559096876460
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.unordered_write: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.row_cache: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                              Options.wal_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.two_write_queues: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.wal_compression: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.atomic_flush: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.max_background_jobs: 4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.max_background_compactions: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.max_subcompactions: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.max_open_files: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Compression algorithms supported:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kZSTD supported: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kXpressCompression supported: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kBZip2Compression supported: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kLZ4Compression supported: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kZlibCompression supported: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         kSnappyCompression supported: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 systemd[1]: Started libpod-conmon-d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3.scope.
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
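
The per-column-family dump above pins down this OSD's memtable settings: write_buffer_size=16777216 (16 MiB), max_write_buffer_number=64, min_write_buffer_number_to_merge=6. A back-of-envelope sketch of the worst-case memtable footprint per column family, using just those two numbers (an upper-bound estimate, not RocksDB's exact accounting):

    # Rough worst-case memtable memory per column family, from the values
    # logged above (upper-bound sketch only, not RocksDB's precise accounting).
    write_buffer_size = 16_777_216      # Options.write_buffer_size (16 MiB)
    max_write_buffer_number = 64        # Options.max_write_buffer_number
    print(write_buffer_size * max_write_buffer_number / 2**20, "MiB")  # 1024.0 MiB
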
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559095960380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55909594d090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 38520058-5321-4c20-b65e-18ccdc165478
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982468841, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982478588, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392982, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "38520058-5321-4c20-b65e-18ccdc165478", "db_session_id": "Y4COQFGEX2AH8MDYLW2C", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:09:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
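
The kernel lines above flag that these xfs mounts store inode timestamps as signed 32-bit seconds, so they roll over at 0x7fffffff seconds past the Unix epoch. A quick check of what that limit means in calendar terms:

    from datetime import datetime, timezone
    # 0x7fffffff is the cutoff the kernel logs above for these xfs mounts.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
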
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982523587, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392982, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "38520058-5321-4c20-b65e-18ccdc165478", "db_session_id": "Y4COQFGEX2AH8MDYLW2C", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:09:42 compute-0 podman[90587]: 2025-11-29 05:09:42.524520413 +0000 UTC m=+0.207369595 container init d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982529568, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392982, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "38520058-5321-4c20-b65e-18ccdc165478", "db_session_id": "Y4COQFGEX2AH8MDYLW2C", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:09:42 compute-0 podman[90587]: 2025-11-29 05:09:42.534616819 +0000 UTC m=+0.217466001 container start d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392982552448, "job": 1, "event": "recovery_finished"}
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 05:09:42 compute-0 podman[90587]: 2025-11-29 05:09:42.55601959 +0000 UTC m=+0.238868762 container attach d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559095aba000
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: DB pointer 0x55909685fa00
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
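
The _open_db line prints the comma-separated option string BlueStore handed to RocksDB (in Ceph this normally derives from the bluestore_rocksdb_options setting). Splitting it into key/value pairs makes it easy to diff against another OSD; the string below is copied verbatim from the line above, and the helper name is ours:

    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    def parse_rocksdb_opts(s):
        # Each entry is key=value; values may be plain ints or sizes like "2MB".
        return dict(kv.split("=", 1) for kv in s.split(","))

    print(parse_rocksdb_opts(opts)["write_buffer_size"])   # -> 16777216
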
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 29 05:09:42 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:09:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
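
The tables above are RocksDB's periodic statistics dump, written through the OSD log at startup; each bracketed name (default, m-*, p-*, O-*, L, P) is a RocksDB column family used by BlueStore. An equivalent dump can be requested from a running OSD over its admin socket; a minimal sketch, assuming this Reef build exposes dump_objectstore_kv_stats and the admin socket is in its default location:

    # Request the BlueStore RocksDB statistics from the running daemon
    # (assumption: dump_objectstore_kv_stats is available in this build).
    ceph daemon osd.1 dump_objectstore_kv_stats

    # Optionally force a manual compaction first if fresher compaction
    # counters are wanted.
    ceph tell osd.1 compact
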
Nov 29 05:09:42 compute-0 ceph-osd[90181]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 05:09:42 compute-0 ceph-osd[90181]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 05:09:42 compute-0 ceph-osd[90181]: _get_class not permitted to load lua
Nov 29 05:09:42 compute-0 ceph-osd[90181]: _get_class not permitted to load sdk
Nov 29 05:09:42 compute-0 ceph-osd[90181]: _get_class not permitted to load test_remote_reads
Nov 29 05:09:42 compute-0 ceph-osd[90181]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 05:09:42 compute-0 ceph-osd[90181]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 05:09:42 compute-0 ceph-osd[90181]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 05:09:42 compute-0 ceph-osd[90181]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 05:09:42 compute-0 ceph-osd[90181]: osd.1 0 load_pgs
Nov 29 05:09:42 compute-0 ceph-osd[90181]: osd.1 0 load_pgs opened 0 pgs
Nov 29 05:09:42 compute-0 ceph-osd[90181]: osd.1 0 log_to_monitors true
Nov 29 05:09:42 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1[90177]: 2025-11-29T05:09:42.628+0000 7f1f71507740 -1 osd.1 0 log_to_monitors true
Nov 29 05:09:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 29 05:09:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
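
The mon_command above is the wire form of the device-class assignment each OSD issues for itself at boot; the same change can be made by hand. A sketch, assuming admin credentials on this host:

    # CLI equivalent of the dispatched mon_command above.
    ceph osd crush set-device-class hdd osd.1
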
Nov 29 05:09:42 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3779420554; not ready for session (expect reconnect)
Nov 29 05:09:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:42 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 05:09:42 compute-0 ceph-mon[75176]: Deploying daemon osd.2 on compute-0
Nov 29 05:09:42 compute-0 ceph-mon[75176]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:42 compute-0 ceph-mon[75176]: from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 05:09:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:43 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test[90711]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 05:09:43 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test[90711]:                             [--no-systemd] [--no-tmpfs]
Nov 29 05:09:43 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test[90711]: ceph-volume activate: error: unrecognized arguments: --bad-option
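
The usage error above comes from a container cephadm names osd-2-activate-test: it appears to probe ceph-volume with a deliberately invalid flag, so the unrecognized-arguments message is the probe's expected outcome rather than a deployment failure (an inference from the container name and the flag, not confirmed against cephadm source here). The probe, reproduced:

    # Deliberately invalid flag; ceph-volume exiting with a usage error
    # shows the `activate` subcommand is present and parsing arguments.
    ceph-volume activate --bad-option
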
Nov 29 05:09:43 compute-0 systemd[1]: libpod-d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3.scope: Deactivated successfully.
Nov 29 05:09:43 compute-0 podman[90587]: 2025-11-29 05:09:43.166217761 +0000 UTC m=+0.849067033 container died d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 29 05:09:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:09:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 05:09:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Nov 29 05:09:43 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Nov 29 05:09:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 05:09:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 05:09:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
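
CRUSH weights are conventionally the device size in TiB, so initial_weight 0.0195 corresponds to 0.0195 x 1024 GiB, roughly a 20 GiB test volume. CLI equivalent of the dispatched command, assuming admin credentials:

    # Weight is in TiB: 0.0195 TiB * 1024 GiB/TiB = ~20 GiB device.
    ceph osd crush create-or-move osd.1 0.0195 host=compute-0 root=default
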
Nov 29 05:09:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:43 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 05:09:43 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:43 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
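
The three ENOENT results above are the mgr polling metadata for OSDs that have not finished booting; the mon only stores an OSD's metadata after that OSD reports in, so these errors are transient during bring-up. Once a daemon is up, the same query succeeds:

    # Returns the OSD's metadata (hostname, device class, objectstore, ...)
    # once the daemon has booted; ENOENT before that, as logged above.
    ceph osd metadata 0
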
Nov 29 05:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-924fb16c42529e1b67bdb31ecee621faeb999276dbf3306ff65caf770562e09c-merged.mount: Deactivated successfully.
Nov 29 05:09:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:43 compute-0 podman[90587]: 2025-11-29 05:09:43.28048374 +0000 UTC m=+0.963332912 container remove d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate-test, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:09:43 compute-0 systemd[1]: libpod-conmon-d80e3d2accbd6a9083f5941622db183158664f5ccbc5b0582f27e6542bb706b3.scope: Deactivated successfully.
Nov 29 05:09:43 compute-0 sudo[90861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siabwanibpkugjuhhmkorbaernlltcov ; /usr/bin/python3'
Nov 29 05:09:43 compute-0 sudo[90861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:43 compute-0 python3[90863]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
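
The Ansible task above asserts the number of up OSDs by piping ceph status JSON through jq. A minimal standalone sketch of the same check, assuming admin credentials and this deployment's target of 3 OSDs:

    # Poll until all 3 OSDs report up; .osdmap.num_up_osds is the field
    # the playbook reads in the command above.
    until [ "$(ceph status --format json | jq .osdmap.num_up_osds)" -eq 3 ]; do
        sleep 5
    done
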
Nov 29 05:09:43 compute-0 ceph-osd[89151]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.252 iops: 6208.455 elapsed_sec: 0.483
Nov 29 05:09:43 compute-0 ceph-osd[89151]: log_channel(cluster) log [WRN] : OSD bench result of 6208.454805 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
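
The warning above recommends measuring the device's real IOPS with an external tool and overriding the mclock capacity option. A minimal sketch, assuming the OSD's data device is /dev/vdb and a measured figure of 450 IOPS (both placeholders for illustration):

    # Destructive on the target device: benchmark raw 4k random-write IOPS.
    fio --name=iops-probe --filename=/dev/vdb --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based

    # Override the mclock IOPS capacity for the HDD-class osd.0 with the
    # measured value (450 is a placeholder, not a recommendation).
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 450
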
Nov 29 05:09:43 compute-0 ceph-osd[89151]: osd.0 0 waiting for initial osdmap
Nov 29 05:09:43 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0[89147]: 2025-11-29T05:09:43.492+0000 7fc8ec2b8640 -1 osd.0 0 waiting for initial osdmap
Nov 29 05:09:43 compute-0 ceph-osd[89151]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 29 05:09:43 compute-0 ceph-osd[89151]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 29 05:09:43 compute-0 ceph-osd[89151]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 29 05:09:43 compute-0 ceph-osd[89151]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Nov 29 05:09:43 compute-0 ceph-osd[89151]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 05:09:43 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-0[89147]: 2025-11-29T05:09:43.517+0000 7fc8e70c9640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 05:09:43 compute-0 ceph-osd[89151]: osd.0 9 set_numa_affinity not setting numa affinity
Nov 29 05:09:43 compute-0 ceph-osd[89151]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 05:09:43 compute-0 podman[90871]: 2025-11-29 05:09:43.531597728 +0000 UTC m=+0.055368788 container create b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:43 compute-0 systemd[1]: Started libpod-conmon-b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865.scope.
Nov 29 05:09:43 compute-0 systemd[1]: Reloading.
Nov 29 05:09:43 compute-0 podman[90871]: 2025-11-29 05:09:43.50661389 +0000 UTC m=+0.030385040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:43 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 05:09:43 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 05:09:43 compute-0 systemd-rc-local-generator[90930]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:43 compute-0 systemd-sysv-generator[90934]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:43 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3779420554; not ready for session (expect reconnect)
Nov 29 05:09:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:43 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 05:09:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a972367fe0572afc7bd50dd5091f7dc3d7e60192c89de68eff2d87ecdfd71235/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a972367fe0572afc7bd50dd5091f7dc3d7e60192c89de68eff2d87ecdfd71235/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a972367fe0572afc7bd50dd5091f7dc3d7e60192c89de68eff2d87ecdfd71235/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:43 compute-0 podman[90871]: 2025-11-29 05:09:43.952821553 +0000 UTC m=+0.476592683 container init b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 05:09:43 compute-0 podman[90871]: 2025-11-29 05:09:43.967606582 +0000 UTC m=+0.491377652 container start b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:09:43 compute-0 podman[90871]: 2025-11-29 05:09:43.97206887 +0000 UTC m=+0.495840020 container attach b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:43 compute-0 systemd[1]: Reloading.
Nov 29 05:09:44 compute-0 systemd-rc-local-generator[90971]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:09:44 compute-0 systemd-sysv-generator[90974]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:09:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 29 05:09:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:09:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 05:09:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 29 05:09:44 compute-0 ceph-osd[90181]: osd.1 0 done with init, starting boot process
Nov 29 05:09:44 compute-0 ceph-osd[90181]: osd.1 0 start_boot
Nov 29 05:09:44 compute-0 ceph-osd[90181]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 05:09:44 compute-0 ceph-osd[90181]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 05:09:44 compute-0 ceph-osd[90181]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 05:09:44 compute-0 ceph-osd[90181]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 05:09:44 compute-0 ceph-osd[90181]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 29 05:09:44 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554] boot
Nov 29 05:09:44 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 29 05:09:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 05:09:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:44 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:44 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:44 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1814125376; not ready for session (expect reconnect)
Nov 29 05:09:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:44 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:44 compute-0 ceph-mon[75176]: from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 05:09:44 compute-0 ceph-mon[75176]: osdmap e9: 3 total, 0 up, 3 in
Nov 29 05:09:44 compute-0 ceph-mon[75176]: from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 05:09:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:44 compute-0 ceph-mon[75176]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 05:09:44 compute-0 ceph-mon[75176]: OSD bench result of 6208.454805 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
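[editor's note] The warning above is the mclock scheduler's sanity check: the 4 KiB bench measured ~6208 IOPS, outside the 50-500 IOPS range considered plausible for a rotational device, so the default capacity of 315 IOPS is kept. A minimal sketch of the recommended follow-up, assuming a spare device of the same class (the device path and the final figure are placeholders; never run destructive fio against a device that already holds OSD data):

    # measure 4 KiB random-write IOPS with fio, mirroring the OSD bench pattern
    fio --name=osd-iops --filename=/dev/mapper/spare_vg-spare_lv --direct=1 \
        --ioengine=libaio --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based
    # then pin the measured capacity for the HDD-class OSD (320 is a placeholder)
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 320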
Nov 29 05:09:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:44 compute-0 ceph-osd[89151]: osd.0 10 state: booting -> active
Nov 29 05:09:44 compute-0 systemd[1]: Starting Ceph osd.2 for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:09:44 compute-0 podman[91047]: 2025-11-29 05:09:44.565906714 +0000 UTC m=+0.058059573 container create 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 05:09:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715470949' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 05:09:44 compute-0 nice_merkle[90898]: 
Nov 29 05:09:44 compute-0 nice_merkle[90898]: {"fsid":"93f82912-647c-5e78-b081-707d0a2966d8","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":110,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":10,"num_osds":3,"num_up_osds":1,"osd_up_since":1764392984,"num_in_osds":3,"osd_in_since":1764392965,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T05:09:43.260960+0000","services":{}},"progress_events":{}}
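[editor's note] The single-line JSON above is the `status --format json` reply fetched through the helper container. When reading such dumps by hand, a jq filter (assumption: jq is installed on the host) pulls out the OSD counts that matter during bring-up:

    # from the payload above this prints: 3 total, 1 up, 3 in
    ceph status --format json | jq '.osdmap | {num_osds, num_up_osds, num_in_osds}'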
Nov 29 05:09:44 compute-0 systemd[1]: libpod-b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865.scope: Deactivated successfully.
Nov 29 05:09:44 compute-0 podman[90871]: 2025-11-29 05:09:44.617291024 +0000 UTC m=+1.141062084 container died b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:44 compute-0 podman[91047]: 2025-11-29 05:09:44.538657591 +0000 UTC m=+0.030810530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a972367fe0572afc7bd50dd5091f7dc3d7e60192c89de68eff2d87ecdfd71235-merged.mount: Deactivated successfully.
Nov 29 05:09:44 compute-0 podman[91047]: 2025-11-29 05:09:44.694811589 +0000 UTC m=+0.186964478 container init 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:09:44 compute-0 podman[91047]: 2025-11-29 05:09:44.70018583 +0000 UTC m=+0.192338689 container start 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:09:44 compute-0 podman[91047]: 2025-11-29 05:09:44.759326499 +0000 UTC m=+0.251479368 container attach 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 05:09:44 compute-0 podman[90871]: 2025-11-29 05:09:44.789257437 +0000 UTC m=+1.313028497 container remove b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865 (image=quay.io/ceph/ceph:v18, name=nice_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:09:44 compute-0 systemd[1]: libpod-conmon-b0e0acbf26a6db540c2bfccbd444ea98826a6f114761b791cebdcc338c73c865.scope: Deactivated successfully.
Nov 29 05:09:44 compute-0 sudo[90861]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:45 compute-0 sudo[91105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epfmwcwwqqhlftuowycxazmdcvsbtjwf ; /usr/bin/python3'
Nov 29 05:09:45 compute-0 sudo[91105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:45 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1814125376; not ready for session (expect reconnect)
Nov 29 05:09:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:45 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:45 compute-0 python3[91107]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
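[editor's note] Worth noting about the command above: because no pool type is passed, the mon (see the dispatch at 05:09:46 below) slots `replicated_rule` into the erasure_code_profile position of `osd pool create`. The pool still comes out replicated, but an unambiguous invocation would name the type explicitly; a sketch, assuming a replicated pool under the default `replicated_rule` CRUSH rule is what the playbook intends:

    ceph osd pool create vms replicated replicated_rule --autoscale-mode on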
Nov 29 05:09:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 05:09:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 29 05:09:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:09:45 compute-0 ceph-mon[75176]: from='osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 05:09:45 compute-0 ceph-mon[75176]: osd.0 [v2:192.168.122.100:6802/3779420554,v1:192.168.122.100:6803/3779420554] boot
Nov 29 05:09:45 compute-0 ceph-mon[75176]: osdmap e10: 3 total, 1 up, 3 in
Nov 29 05:09:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 05:09:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:45 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3715470949' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 05:09:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:45 compute-0 podman[91108]: 2025-11-29 05:09:45.311790025 +0000 UTC m=+0.043275533 container create 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 05:09:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 29 05:09:45 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 29 05:09:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:45 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:45 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:45 compute-0 ceph-mgr[75473]: [devicehealth INFO root] creating mgr pool
Nov 29 05:09:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 29 05:09:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 05:09:45 compute-0 systemd[1]: Started libpod-conmon-55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a.scope.
Nov 29 05:09:45 compute-0 podman[91108]: 2025-11-29 05:09:45.293345427 +0000 UTC m=+0.024830955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7598c0f4f42a120de85fc25c2eec914c851943d7550ae09de03f9a8fc4372d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7598c0f4f42a120de85fc25c2eec914c851943d7550ae09de03f9a8fc4372d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:45 compute-0 podman[91108]: 2025-11-29 05:09:45.41763852 +0000 UTC m=+0.149124048 container init 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:45 compute-0 podman[91108]: 2025-11-29 05:09:45.423722798 +0000 UTC m=+0.155208306 container start 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:45 compute-0 podman[91108]: 2025-11-29 05:09:45.439654415 +0000 UTC m=+0.171139923 container attach 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:45 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 05:09:45 compute-0 bash[91047]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 05:09:45 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 05:09:45 compute-0 bash[91047]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 05:09:45 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 05:09:45 compute-0 bash[91047]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 29 05:09:45 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 05:09:45 compute-0 bash[91047]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 29 05:09:45 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:45 compute-0 bash[91047]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:45 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 05:09:45 compute-0 bash[91047]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 29 05:09:45 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate[91064]: --> ceph-volume raw activate successful for osd ID: 2
Nov 29 05:09:45 compute-0 bash[91047]: --> ceph-volume raw activate successful for osd ID: 2
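[editor's note] The activate container above walks the standard `ceph-volume raw activate` steps: chown the OSD directory, prime it from the BlueStore device with ceph-bluestore-tool, fix device-node ownership, and symlink the block device into place. To replay the activation by hand, something like the following sketch works under cephadm (fsid and LV path are taken from the log itself):

    cephadm shell --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- \
        ceph-volume raw activate --device /dev/mapper/ceph_vg2-ceph_lv2 --no-systemd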
Nov 29 05:09:45 compute-0 systemd[1]: libpod-155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5.scope: Deactivated successfully.
Nov 29 05:09:45 compute-0 systemd[1]: libpod-155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5.scope: Consumed 1.030s CPU time.
Nov 29 05:09:45 compute-0 podman[91242]: 2025-11-29 05:09:45.780639649 +0000 UTC m=+0.040383353 container died 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-85712ebfc733296654e3bf2fad5f4be0350d749587509a98d20318a8730e4f56-merged.mount: Deactivated successfully.
Nov 29 05:09:45 compute-0 podman[91242]: 2025-11-29 05:09:45.909916163 +0000 UTC m=+0.169659787 container remove 155b150b3e433b73873488a46c0cad67c2ba5a4339cbc9d95ff134a708d844d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2-activate, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 05:09:46 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1816088569' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:46 compute-0 podman[91323]: 2025-11-29 05:09:46.131738009 +0000 UTC m=+0.058078644 container create 5bc94574df1b12208cc03bc87dbd53e57cea6e8697069fb41ede7eaffebca573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 05:09:46 compute-0 podman[91323]: 2025-11-29 05:09:46.098900439 +0000 UTC m=+0.025241104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0af1e304b3b7996d786c78052034fcf0e12c26019205c8895eb7863215a93d89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0af1e304b3b7996d786c78052034fcf0e12c26019205c8895eb7863215a93d89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0af1e304b3b7996d786c78052034fcf0e12c26019205c8895eb7863215a93d89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0af1e304b3b7996d786c78052034fcf0e12c26019205c8895eb7863215a93d89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0af1e304b3b7996d786c78052034fcf0e12c26019205c8895eb7863215a93d89/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:46 compute-0 podman[91323]: 2025-11-29 05:09:46.24649095 +0000 UTC m=+0.172831605 container init 5bc94574df1b12208cc03bc87dbd53e57cea6e8697069fb41ede7eaffebca573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:09:46 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1814125376; not ready for session (expect reconnect)
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:46 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:46 compute-0 podman[91323]: 2025-11-29 05:09:46.262942409 +0000 UTC m=+0.189283044 container start 5bc94574df1b12208cc03bc87dbd53e57cea6e8697069fb41ede7eaffebca573 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:46 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:46 compute-0 bash[91323]: 5bc94574df1b12208cc03bc87dbd53e57cea6e8697069fb41ede7eaffebca573
Nov 29 05:09:46 compute-0 systemd[1]: Started Ceph osd.2 for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 05:09:46 compute-0 ceph-mon[75176]: purged_snaps scrub starts
Nov 29 05:09:46 compute-0 ceph-mon[75176]: purged_snaps scrub ok
Nov 29 05:09:46 compute-0 ceph-mon[75176]: pgmap v33: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 05:09:46 compute-0 ceph-mon[75176]: osdmap e11: 3 total, 1 up, 3 in
Nov 29 05:09:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 05:09:46 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1816088569' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:46 compute-0 sudo[90269]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:46 compute-0 ceph-osd[91343]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 05:09:46 compute-0 ceph-osd[91343]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 05:09:46 compute-0 ceph-osd[91343]: pidfile_write: ignore empty --pid-file
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557761b53800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557761b53800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557761b53800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557761b53800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557762995800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557762995800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557762995800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557762995800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557762995800 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 05:09:46 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 05:09:46 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1816088569' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 29 05:09:46 compute-0 happy_moore[91131]: pool 'vms' created
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 05:09:46 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 29 05:09:46 compute-0 ceph-osd[89151]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 05:09:46 compute-0 ceph-osd[89151]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 29 05:09:46 compute-0 ceph-osd[89151]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 05:09:46 compute-0 systemd[1]: libpod-55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a.scope: Deactivated successfully.
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:46 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:46 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:46 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 12 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 29 05:09:46 compute-0 podman[91108]: 2025-11-29 05:09:46.405937038 +0000 UTC m=+1.137422566 container died 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:09:46 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 05:09:46 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:46 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:46 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:46 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e7598c0f4f42a120de85fc25c2eec914c851943d7550ae09de03f9a8fc4372d-merged.mount: Deactivated successfully.
Nov 29 05:09:46 compute-0 sudo[91364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:46 compute-0 sudo[91364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:46 compute-0 sudo[91364]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:46 compute-0 podman[91108]: 2025-11-29 05:09:46.538078572 +0000 UTC m=+1.269564110 container remove 55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a (image=quay.io/ceph/ceph:v18, name=happy_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 05:09:46 compute-0 systemd[1]: libpod-conmon-55620699fdc222549414b123c03861c06348d689cc387ce7b8ada32e6f79337a.scope: Deactivated successfully.
Nov 29 05:09:46 compute-0 sudo[91105]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:46 compute-0 sudo[91394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:46 compute-0 sudo[91394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:46 compute-0 sudo[91394]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557761b53800 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 05:09:46 compute-0 sudo[91419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:46 compute-0 sudo[91419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:46 compute-0 sudo[91419]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:46 compute-0 sudo[91482]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swrrlwvalkqgvxejszhbszrqfugemljh ; /usr/bin/python3'
Nov 29 05:09:46 compute-0 sudo[91482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:46 compute-0 sudo[91458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:09:46 compute-0 sudo[91458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:46 compute-0 ceph-osd[91343]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 29 05:09:46 compute-0 python3[91494]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:46 compute-0 ceph-osd[91343]: load: jerasure load: lrc 
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:46 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 05:09:46 compute-0 podman[91503]: 2025-11-29 05:09:46.962703129 +0000 UTC m=+0.051946834 container create 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:09:47 compute-0 systemd[1]: Started libpod-conmon-72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd.scope.
Nov 29 05:09:47 compute-0 podman[91503]: 2025-11-29 05:09:46.940284054 +0000 UTC m=+0.029527769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da229cbfedf1daaedab80c22b8f90dcec95f8eb700a15b7ac0e9fd06b2bc16ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da229cbfedf1daaedab80c22b8f90dcec95f8eb700a15b7ac0e9fd06b2bc16ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:47 compute-0 podman[91503]: 2025-11-29 05:09:47.105492312 +0000 UTC m=+0.194736097 container init 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:47 compute-0 podman[91503]: 2025-11-29 05:09:47.115662139 +0000 UTC m=+0.204905834 container start 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:09:47 compute-0 podman[91503]: 2025-11-29 05:09:47.133074243 +0000 UTC m=+0.222318018 container attach 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 05:09:47 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1814125376; not ready for session (expect reconnect)
Nov 29 05:09:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:47 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:47 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:47 compute-0 podman[91563]: 2025-11-29 05:09:47.264020297 +0000 UTC m=+0.046831319 container create e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:09:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v36: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 05:09:47 compute-0 systemd[1]: Started libpod-conmon-e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81.scope.
Nov 29 05:09:47 compute-0 podman[91563]: 2025-11-29 05:09:47.246227925 +0000 UTC m=+0.029038977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 29 05:09:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 05:09:47 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1816088569' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:47 compute-0 ceph-mon[75176]: osdmap e12: 3 total, 1 up, 3 in
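"3 total, 1 up, 3 in" reads as: three OSDs exist in the map, all three still count for data placement (in), but only one has a live daemon (up); osd.1 and osd.2 appear to be mid-boot in this capture. A trivial parser for that summary line, assuming the format shown in these samples:

```python
import re

def parse_osdmap_summary(line: str) -> dict:
    """Pull (total, up, in) out of an 'osdmap eN: X total, Y up, Z in' line."""
    m = re.search(r"(\d+) total, (\d+) up, (\d+) in", line)
    if not m:
        raise ValueError(f"unrecognized osdmap summary: {line!r}")
    total, up, n_in = map(int, m.groups())
    return {"total": total, "up": up, "in": n_in, "down": total - up}

print(parse_osdmap_summary("osdmap e12: 3 total, 1 up, 3 in"))
# {'total': 3, 'up': 1, 'in': 3, 'down': 2}
```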
Nov 29 05:09:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 05:09:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:47 compute-0 podman[91563]: 2025-11-29 05:09:47.409549477 +0000 UTC m=+0.192360519 container init e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:09:47 compute-0 podman[91563]: 2025-11-29 05:09:47.421537669 +0000 UTC m=+0.204348741 container start e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:47 compute-0 recursing_panini[91580]: 167 167
Nov 29 05:09:47 compute-0 systemd[1]: libpod-e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81.scope: Deactivated successfully.
Nov 29 05:09:47 compute-0 conmon[91580]: conmon e066c8afb37c27a5ddc7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81.scope/container/memory.events
Nov 29 05:09:47 compute-0 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 05:09:47 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 05:09:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 29 05:09:47 compute-0 podman[91563]: 2025-11-29 05:09:47.450924424 +0000 UTC m=+0.233735466 container attach e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:47 compute-0 podman[91563]: 2025-11-29 05:09:47.451908838 +0000 UTC m=+0.234719870 container died e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
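The five podman events above trace one short-lived container: create, init, start, attach, died, all within the same second. The auto-generated name (recursing_panini) suggests a one-shot helper invocation rather than a managed service container; the m=+ monotonic offsets make its lifetime easy to read off. Offsets below are copied from the event lines, the interpretation is mine:

```python
# m=+N offsets (seconds) copied from the podman event lines above.
events = {
    "create": 0.046831319,
    "init":   0.192360519,
    "start":  0.204348741,
    "attach": 0.233735466,
    "died":   0.234719870,
}
runtime_ms = (events["died"] - events["start"]) * 1000
print(f"container ran for ~{runtime_ms:.0f} ms after start")  # ~30 ms
```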
Nov 29 05:09:47 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 29 05:09:47 compute-0 ceph-osd[91343]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
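The mClockScheduler line above is internally consistent: dividing the per-shard bandwidth by the per-IO cost recovers the IOPS capacity the scheduler assumed. 157286400 bytes/s is exactly 150 MiB/s, and the quotient lands on ~315 IOPS, the documented Reef default for osd_mclock_max_capacity_iops_hdd on rotational devices. Pure arithmetic, no Ceph API:

```python
# Numbers copied from the mClockScheduler log line above.
bandwidth_per_shard = 157_286_400.0  # bytes/second == 150 MiB/s
cost_per_io = 499_321.90             # bytes/io

print(f"{bandwidth_per_shard / 2**20:.0f} MiB/s shard capacity")
print(f"implied IOPS capacity: {bandwidth_per_shard / cost_per_io:.1f}")
# ~315.0, matching the default osd_mclock_max_capacity_iops_hdd.
```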
Nov 29 05:09:47 compute-0 ceph-osd[91343]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1cc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluefs mount
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluefs mount shared_bdev_used = 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: RocksDB version: 7.9.2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Git sha 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: DB SUMMARY
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: DB Session ID:  6JZI3E9CISG6DWQI9SRA
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: CURRENT file:  CURRENT
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                         Options.error_if_exists: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.create_if_missing: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                                     Options.env: 0x5577629e7d50
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                                Options.info_log: 0x557761bde800
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                              Options.statistics: (nil)
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.use_fsync: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                              Options.db_log_dir: 
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.write_buffer_manager: 0x557762af8460
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.unordered_write: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.row_cache: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                              Options.wal_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.two_write_queues: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.wal_compression: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.atomic_flush: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.max_background_jobs: 4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.max_background_compactions: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.max_subcompactions: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.max_open_files: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Compression algorithms supported:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kZSTD supported: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kXpressCompression supported: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kBZip2Compression supported: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kLZ4Compression supported: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kZlibCompression supported: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kSnappyCompression supported: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: DMutex implementation: pthread_mutex_t
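RocksDB echoes its entire effective configuration into the log: the DB-wide Options above, then one near-identical block per column family ([default], [m-0], [m-1], [m-2], ...) below, which is why the same runs repeat. Folding the dump into a dict makes it practical to diff two OSDs; a minimal journal-scraping sketch (the regex and names are mine, not a Ceph or RocksDB API):

```python
import re
import sys

# Matches lines such as:
#   ... rocksdb:   Options.max_write_buffer_number: 64
OPT = re.compile(r"rocksdb:\s+Options\.([\w.\[\]]+)\s*:\s*(.*\S)?\s*$")

def parse_rocksdb_options(lines):
    """Collect Options.<key> -> value pairs from a journalctl capture.

    Later column-family dumps overwrite earlier keys; split the input at
    each '--------------- Options for column family' header if that matters.
    """
    opts = {}
    for line in lines:
        m = OPT.search(line)
        if m:
            opts[m.group(1)] = m.group(2) or ""
    return opts

if __name__ == "__main__":
    opts = parse_rocksdb_options(sys.stdin)
    print(f"{len(opts)} distinct options; compression = {opts.get('compression')}")
```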
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
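One real difference between the column families: [default] above carries the merge operator ".T:int64_array.b:bitwise_xor", while the m-* families that follow report merge_operator: None. BlueStore registers an XOR merge so freelist-bitmap updates can be folded inside RocksDB instead of read-modify-written; the toy below only illustrates why XOR suits that (it is not BlueStore code):

```python
# XOR merging is order-independent and needs no prior read:
# merge(merge(v, a), b) == v ^ a ^ b, so RocksDB can fold operands lazily.
stored = 0b0000_0000
for delta in (0b0000_1111, 0b0000_1100):  # two bitmap toggle operands
    stored ^= delta                        # the "bitwise_xor" merge step
print(bin(stored))  # 0b11: bits toggled an even number of times cancel out
```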
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
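The table_factory dump above (cache_index_and_filter_blocks: 1, pin_top_level_index_and_filter: 1, block_size: 4096, format_version: 5, filter_policy: bloomfilter) maps onto RocksDB's BlockBasedTableOptions. Below is a minimal C++ sketch of building an equivalent table factory through the public RocksDB API; the 10-bits-per-key bloom setting is an assumption (the log only says "bloomfilter"), and the stock sharded LRUCache stands in for Ceph's BinnedLRUCache.

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::Options MakeOptionsLikeLog() {
      rocksdb::BlockBasedTableOptions t;
      t.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
      t.block_size = 4096;                      // block_size: 4096
      t.format_version = 5;                     // format_version: 5
      // Capacity and shard count as logged; the OSD really uses Ceph's BinnedLRUCache.
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      // "filter_policy: bloomfilter"; 10 bits per key is assumed here.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      rocksdb::Options opts;
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return opts;
    }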
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:47 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:47 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:47 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:47 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
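The two audit entries above show the mgr asking the mon for "osd metadata" on osd.1 and osd.2, and the mon returning (2) No such file or directory; that is expected while those OSDs have not yet registered their metadata (osd process 91343 on this node is itself still printing its startup options). A minimal sketch of issuing the same mon command from a client via the librados C++ API; the client.admin identity and the default ceph.conf search path are assumptions.

    // build: g++ osd_metadata.cc -lrados
    #include <cerrno>
    #include <iostream>
    #include <string>
    #include <rados/librados.hpp>

    int main() {
      librados::Rados cluster;
      cluster.init2("client.admin", "ceph", 0);  // assumed identity
      cluster.conf_read_file(nullptr);           // default config search path
      if (cluster.connect() < 0) return 1;

      // Same JSON command the mgr sent, as seen in the audit log above.
      librados::bufferlist inbl, outbl;
      std::string outs;
      int r = cluster.mon_command(
          "{\"prefix\": \"osd metadata\", \"id\": 1}", inbl, &outbl, &outs);
      if (r == -ENOENT)   // the "(2) No such file or directory" case logged above
        std::cerr << "osd.1 has not registered metadata yet: " << outs << "\n";
      else if (r == 0)
        std::cout << std::string(outbl.c_str(), outbl.length()) << "\n";
      cluster.shutdown();
      return 0;
    }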
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
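Each column family in this dump is configured with write_buffer_size 16777216, max_write_buffer_number 64 and min_write_buffer_number_to_merge 6, so a single family can accumulate up to 64 x 16 MiB = 1 GiB of memtables before writes stall, and a flush waits until at least six memtables (about 96 MiB) can be merged. A minimal sketch of the corresponding stock ColumnFamilyOptions fields; Ceph most likely feeds these values in through its RocksDB options string rather than code like this.

    #include <rocksdb/options.h>

    rocksdb::ColumnFamilyOptions MakeCfOptionsLikeLog() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 * 1024 * 1024;   // Options.write_buffer_size: 16777216
      cf.max_write_buffer_number = 64;           // up to 1 GiB of memtables per CF
      cf.min_write_buffer_number_to_merge = 6;   // flush merges at least ~96 MiB
      cf.compression = rocksdb::kLZ4Compression; // Options.compression: LZ4
      return cf;
    }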
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
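With max_bytes_for_level_base 1073741824, max_bytes_for_level_multiplier 8.000000, all addtl[] factors at 1, and level_compaction_dynamic_level_bytes 0, the standard leveled-compaction target for level n is base x 8^(n-1): 1 GiB at L1, 8 GiB at L2, 64 GiB at L3, and so on through L6 (num_levels: 7; L0 is triggered by file count, not size). A self-contained sketch of that arithmetic:

    #include <cstdint>
    #include <cstdio>

    int main() {
      const uint64_t base = 1073741824ULL;  // max_bytes_for_level_base (1 GiB)
      const uint64_t multiplier = 8;        // max_bytes_for_level_multiplier
      uint64_t target = base;
      for (int level = 1; level <= 6; ++level) {  // L1..L6 are size-targeted
        std::printf("L%d target: %llu bytes (%.0f GiB)\n", level,
                    (unsigned long long)target, target / 1073741824.0);
        target *= multiplier;
      }
      return 0;
    }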
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
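The [O-0] family that begins below is the first in this dump attached to a different block cache (0x557761bc6430, capacity 536870912 = 512 MiB) from the 483183820-byte cache (about 0.45 GiB) shared by the m-* and p-* families above, consistent with BlueStore splitting one cache budget into per-role shares. BinnedLRUCache is Ceph's own sharded cache rather than stock RocksDB; the following is a hedged sketch of the same two-cache layout using the public API, with NewLRUCache standing in and the variable names chosen here purely for illustration.

    #include <memory>
    #include <rocksdb/cache.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions WithCache(std::shared_ptr<rocksdb::Cache> cache) {
      rocksdb::BlockBasedTableOptions t;
      t.block_cache = std::move(cache);
      rocksdb::ColumnFamilyOptions cf;
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }

    // One cache shared by the m-*/p-* families, a separate one for O-*,
    // mirroring the two pointers and capacities seen in this dump.
    auto shared_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
    auto o_cache      = rocksdb::NewLRUCache(536870912, /*num_shard_bits=*/4);
    rocksdb::ColumnFamilyOptions p0 = WithCache(shared_cache);
    rocksdb::ColumnFamilyOptions o0 = WithCache(o_cache);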
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=12) [0] r=0 lpr=12 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdee60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 58ce6855-c8a7-4728-93f6-6b17cab7a3d9
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987506293, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987506697, "job": 1, "event": "recovery_finished"}
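The two EVENT_LOG_v1 entries above carry a machine-readable JSON payload after the tag. A minimal parsing sketch (plain Python; payload copied from the recovery_started line, and the decoded time_micros agrees with the syslog timestamp):

    import json
    from datetime import datetime, timezone

    # Payload copied verbatim from the recovery_started entry above.
    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987506293, "job": 1, '
            '"event": "recovery_started", "wal_files": [31]}')
    payload = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
    ts = datetime.fromtimestamp(payload["time_micros"] / 1e6, tz=timezone.utc)
    print(ts.isoformat(), payload["event"], payload.get("wal_files"))
    # -> 2025-11-29T05:09:47.506293+00:00 recovery_started [31]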
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
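The comma-separated string in the _open_db line is BlueStore's RocksDB options override (the bluestore_rocksdb_options setting). A small sketch that splits it into key/value pairs so individual values can be checked against the per-column-family dumps above (string copied verbatim from the log line):

    # Parse BlueStore's RocksDB options string into a dict.
    opts_str = (
        "compression=kLZ4Compression,max_write_buffer_number=64,"
        "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
        "write_buffer_size=16777216,max_background_jobs=4,"
        "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
        "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
        "max_total_wal_size=1073741824,writable_file_max_buffer_size=0"
    )
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))

    # These values are echoed in the column-family dumps above.
    assert opts["write_buffer_size"] == "16777216"
    assert opts["max_write_buffer_number"] == "64"
    assert opts["level0_file_num_compaction_trigger"] == "8"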
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: freelist init
Nov 29 05:09:47 compute-0 ceph-osd[91343]: freelist _read_cfg
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
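The _init_alloc figures are hex byte counts; converting them (values copied from the line above) confirms the 20 GiB capacity and shows only three 0x1000-byte blocks are allocated at this point, matching the near-zero fragmentation reported:

    capacity = 0x4ffc00000              # 21470642176 bytes
    free = 0x4ffbfd000                  # 21470629888 bytes
    block = 0x1000                      # 4 KiB allocation unit
    print(f"{capacity / 2**30:.2f} GiB")     # 20.00 GiB
    assert (capacity - free) // block == 3   # 12288 bytes in use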
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluefs umount
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) close
Nov 29 05:09:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a0b31037095fa5f82a093b26ada942a9c2184aa4dc8c737844f1d46b5dc590e-merged.mount: Deactivated successfully.
Nov 29 05:09:47 compute-0 podman[91563]: 2025-11-29 05:09:47.583504559 +0000 UTC m=+0.366315621 container remove e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Nov 29 05:09:47 compute-0 systemd[1]: libpod-conmon-e066c8afb37c27a5ddc760333eceef46a2e27177c8b260d654c5f0f6221ffc81.scope: Deactivated successfully.
Nov 29 05:09:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 05:09:47 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805837806' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
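The audit entry records the exact command JSON the client dispatched. A hedged sketch of issuing the same request through the python-rados binding (assumes python3-rados is installed and an admin keyring is readable at the default path; the command fields are copied from the log entry above):

    import json
    import rados

    # Default config/keyring locations are an assumption; adjust for
    # containerized deployments.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = {
        "prefix": "osd pool create",
        "pool": "volumes",
        "erasure_code_profile": "replicated_rule",
        "autoscale_mode": "on",
    }
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()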
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bdev(0x557761d1d400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluefs mount
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluefs mount shared_bdev_used = 4718592
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
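The db_paths size is derived from the shared device opened above: 20397110067 bytes is exactly 95% (integer-truncated) of the 21470642176-byte block device, consistent with BlueStore handing RocksDB 95% of the shared bdev for db and db.slow. A one-line check:

    device_bytes = 21470642176      # bdev open size (0x4ffc00000)
    db_path_bytes = 20397110067     # from _prepare_db_environment
    assert device_bytes * 95 // 100 == db_path_bytes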
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: RocksDB version: 7.9.2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Git sha 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: DB SUMMARY
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: DB Session ID:  6JZI3E9CISG6DWQI9SRB
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: CURRENT file:  CURRENT
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                         Options.error_if_exists: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.create_if_missing: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                                     Options.env: 0x557762ba8460
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                                Options.info_log: 0x557761bdf200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                              Options.statistics: (nil)
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.use_fsync: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                              Options.db_log_dir: 
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.write_buffer_manager: 0x557762af8460
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.unordered_write: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.row_cache: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                              Options.wal_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.two_write_queues: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.wal_compression: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.atomic_flush: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.max_background_jobs: 4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.max_background_compactions: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.max_subcompactions: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.max_open_files: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Compression algorithms supported:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kZSTD supported: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kXpressCompression supported: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kBZip2Compression supported: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kLZ4Compression supported: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kZlibCompression supported: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         kSnappyCompression supported: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: DMutex implementation: pthread_mutex_t
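A trivial consistency check (flags transcribed from the "Compression algorithms supported" list above) that the configured LZ4 table compression is among the algorithms this build supports:

    supported = {
        "kZSTD": 0, "kXpressCompression": 0, "kBZip2Compression": 0,
        "kZSTDNotFinalCompression": 0, "kLZ4Compression": 1,
        "kZlibCompression": 1, "kLZ4HCCompression": 1, "kSnappyCompression": 1,
    }
    assert supported["kLZ4Compression"] == 1   # matches Options.compression: LZ4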
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
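[annotation] The [default] dump above is BlueStore's RocksDB tuning echoed back by RocksDB at open time. As a reading aid, here is a minimal C++ sketch of how the headline values would map onto the stock RocksDB API. Assumptions: this is a reconstruction from the logged values, not Ceph's actual code path; Ceph builds these options internally and uses its own BinnedLRUCache, for which NewLRUCache below is only a stand-in; the bloom bits-per-key is not in the log, so 10 is assumed.

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Sketch of the tuning logged for column family [default].
    rocksdb::ColumnFamilyOptions MakeLoggedCfOptions(
        std::shared_ptr<rocksdb::Cache> shared_cache) {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16777216;                 // 16 MiB memtables
      cf.max_write_buffer_number = 64;                 // up to 64 memtables
      cf.min_write_buffer_number_to_merge = 6;         // flush 6 at a time
      cf.compression = rocksdb::kLZ4Compression;       // Options.compression: LZ4
      cf.num_levels = 7;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.level0_file_num_compaction_trigger = 8;       // compact L0 at 8 files,
      cf.level0_slowdown_writes_trigger = 20;          // throttle writers at 20,
      cf.level0_stop_writes_trigger = 36;              // stall them at 36
      cf.target_file_size_base = 67108864;             // 64 MiB SSTs
      cf.max_bytes_for_level_base = 1073741824;        // 1 GiB at L1
      cf.max_bytes_for_level_multiplier = 8.0;         // 8x growth per level
      cf.ttl = 2592000;                                // 30 days

      rocksdb::BlockBasedTableOptions t;               // the "table_factory options" sub-block
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.format_version = 5;
      t.checksum = rocksdb::kXXH3;                     // "checksum: 4"
      t.filter_policy.reset(
          rocksdb::NewBloomFilterPolicy(10));          // bits/key not logged; 10 assumed
      t.block_cache = shared_cache;                    // every CF logs the same 0x557761bc6dd0
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }

    // capacity 483183820 is exactly 0.45 * 1 GiB, consistent with a 45% KV share
    // of the OSD cache (an inference; the log only shows the resulting byte count).
    auto shared_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);

Note that the block cache pointer is identical in every column family dump that follows, i.e. one shared cache serves all shards.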
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
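[annotation] The [m-0] block is byte-for-byte the same tuning as [default]; only Options.merge_operator differs, because the default CF registers BlueStore's merge operators (int64_array for the statfs "T" prefix, bitwise_xor for the bitmap-allocator "b" prefix, hence the ".T:int64_array.b:bitwise_xor" line) while the shard CFs report None. In Ceph this tuning is normally carried as one option string (the bluestore_rocksdb_options config key) rather than set field by field, and RocksDB can parse such strings with its public helper. A hedged sketch of that mechanism follows; the string is reconstructed from the logged values, not read from the deployed config, and semicolons are RocksDB's native delimiter while the Ceph key itself is comma-separated:

    #include <cassert>
    #include <rocksdb/convenience.h>
    #include <rocksdb/options.h>

    int main() {
      // Parse an option string of the kind Ceph carries in bluestore_rocksdb_options.
      rocksdb::Options base, parsed;
      rocksdb::Status s = rocksdb::GetOptionsFromString(
          base,
          "compression=kLZ4Compression;write_buffer_size=16777216;"
          "max_write_buffer_number=64;min_write_buffer_number_to_merge=6;"
          "level0_file_num_compaction_trigger=8;level0_slowdown_writes_trigger=20;"
          "level0_stop_writes_trigger=36;target_file_size_base=67108864;"
          "max_bytes_for_level_base=1073741824;max_bytes_for_level_multiplier=8",
          &parsed);
      assert(s.ok());  // parsed now carries the same values this log echoes back
      return 0;
    }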
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
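[annotation] Why three identical m-* families, plus the p-* set that follows: BlueStore shards its RocksDB key space by prefix group, one column family per shard, so memtable and compaction pressure is spread across shards instead of contended in a single CF; the m-* families appear to hold per-pool omap keys and p-* per-PG omap keys. The layout is driven by the bluestore_rocksdb_cfs sharding spec, in which the parenthesized numbers give the shard count (and optionally a hash range). An illustrative value consistent with the CF names in this log, not read from the deployed config:

    m(3) p(3,0-12)    # "m" sharded into m-0..m-2; "p" into p-0..p-2

All shards reuse the single option set and shared block cache dumped above, which is why every block in this section repeats verbatim apart from the column family name.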
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bde9c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdef60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdef60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:           Options.merge_operator: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557761bdef60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x557761bc6430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.compression: LZ4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.num_levels: 7
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.bloom_locality: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                               Options.ttl: 2592000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                       Options.enable_blob_files: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                           Options.min_blob_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
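The options dumped for this column family pin down the LSM geometry: 16 MiB memtables merged six at a time into L0 files, an eight-file L0 compaction trigger, a 1 GiB L1 with a static 8x fan-out (level_compaction_dynamic_level_bytes is 0), and seven levels. A worked sketch of the sizes those numbers imply; the values are copied from the dump above, and the arithmetic is standard RocksDB level-style sizing, not something the OSD prints:

write_buffer_size = 16 * 2**20          # 16 MiB memtable
min_merge = 6                           # min_write_buffer_number_to_merge
l0_trigger = 8                          # level0_file_num_compaction_trigger
base = 2**30                            # max_bytes_for_level_base (L1 target)
mult = 8                                # max_bytes_for_level_multiplier
num_levels = 7

flush_size = write_buffer_size * min_merge    # ~96 MiB per L0 file, pre-compression
l0_capacity = flush_size * l0_trigger         # ~768 MiB in L0 before L0->L1 compaction
print(f"flush ~{flush_size / 2**20:.0f} MiB, L0 fills at ~{l0_capacity / 2**20:.0f} MiB")
for level in range(1, num_levels):
    print(f"L{level}: {base * mult**(level - 1) / 2**30:.0f} GiB target")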
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
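Manifest recovery enumerates BlueStore's sharded column families: default plus the m-*, p-*, and O-* shards and the L and P families. A minimal sketch, assuming a captured journal excerpt as input, for recovering that (name, id) list from the recovery lines above:

import re

# Pull (name, id) pairs out of the
# "Column family [NAME] (ID n), log number is ..." lines.
CF_LINE = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")

def column_families(lines):
    return [(m.group(1), int(m.group(2)))
            for line in lines
            if (m := CF_LINE.search(line))]

# e.g. [('default', 0), ('m-0', 1), ..., ('P', 11)] for the lines above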
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 58ce6855-c8a7-4728-93f6-6b17cab7a3d9
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987758588, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987763688, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392987, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "58ce6855-c8a7-4728-93f6-6b17cab7a3d9", "db_session_id": "6JZI3E9CISG6DWQI9SRB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987766409, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392987, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "58ce6855-c8a7-4728-93f6-6b17cab7a3d9", "db_session_id": "6JZI3E9CISG6DWQI9SRB", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987773044, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392987, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "58ce6855-c8a7-4728-93f6-6b17cab7a3d9", "db_session_id": "6JZI3E9CISG6DWQI9SRB", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764392987775805, "job": 1, "event": "recovery_finished"}
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 05:09:47 compute-0 podman[91820]: 2025-11-29 05:09:47.788213637 +0000 UTC m=+0.071411247 container create ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557762bb4000
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: DB pointer 0x557762ae9a00
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 29 05:09:47 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
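The _open_db line above echoes the effective RocksDB option string, the same comma-separated key=value form BlueStore takes from its bluestore_rocksdb_options setting. A minimal sketch splitting that exact string (copied from the line above) into a dict:

opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

# Each entry is key=value; split on the first '=' only.
opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
print(opts["write_buffer_size"])   # '16777216', matching the options dump above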
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:09:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
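The per-column-family sections of this stats dump all repeat the same fixed-width "Compaction Stats" table. Its rows are whitespace-separated except for Size, which spans two tokens ("2.63 KB"), so a token-based parse works. A minimal sketch, with column names taken from the printed header and a hypothetical function name; the sample row is the L0 line from the [default] table above:

# Columns after Level, Files, Size, in header order.
COLS = ["Score", "Read(GB)", "Rn(GB)", "Rnp1(GB)", "Write(GB)", "Wnew(GB)",
        "Moved(GB)", "W-Amp", "Rd(MB/s)", "Wr(MB/s)", "Comp(sec)",
        "CompMergeCPU(sec)", "Comp(cnt)", "Avg(sec)", "KeyIn", "KeyDrop",
        "Rblob(GB)", "Wblob(GB)"]

def parse_stats_row(line):
    t = line.split()
    return {"Level": t[0], "Files": t[1], "Size": " ".join(t[2:4]),
            **dict(zip(COLS, t[4:]))}

row = parse_stats_row("  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0"
                      "       0.0      0.0       0.0   1.0      0.0      0.2"
                      "      0.01              0.00         1    0.005       0"
                      "      0       0.0       0.0")
print(row["Size"], row["W-Amp"])   # '2.63 KB' '1.0'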
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
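
[Annotation — not part of the captured log] The block above is the RocksDB statistics dump that BlueStore writes into the OSD's startup log; the [O-n], [L] and [P] headings are column families from BlueStore's sharded RocksDB layout, each reporting its own compaction, stall and block-cache counters. A comparable dump can usually be pulled from a running OSD over its admin socket; the command name below is believed correct for this Reef build but should be treated as an assumption:

    # dump objectstore key/value (RocksDB) statistics for osd.2 via the admin socket
    ceph daemon osd.2 dump_objectstore_kv_stats
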
Nov 29 05:09:47 compute-0 ceph-osd[91343]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 05:09:47 compute-0 ceph-osd[91343]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 05:09:47 compute-0 ceph-osd[91343]: _get_class not permitted to load lua
Nov 29 05:09:47 compute-0 ceph-osd[91343]: _get_class not permitted to load sdk
Nov 29 05:09:47 compute-0 ceph-osd[91343]: _get_class not permitted to load test_remote_reads
Nov 29 05:09:47 compute-0 ceph-osd[91343]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 05:09:47 compute-0 ceph-osd[91343]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 05:09:47 compute-0 ceph-osd[91343]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 05:09:47 compute-0 ceph-osd[91343]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 05:09:47 compute-0 ceph-osd[91343]: osd.2 0 load_pgs
Nov 29 05:09:47 compute-0 ceph-osd[91343]: osd.2 0 load_pgs opened 0 pgs
Nov 29 05:09:47 compute-0 ceph-osd[91343]: osd.2 0 log_to_monitors true
Nov 29 05:09:47 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2[91339]: 2025-11-29T05:09:47.837+0000 7f9a1eee4740 -1 osd.2 0 log_to_monitors true
Nov 29 05:09:47 compute-0 systemd[1]: Started libpod-conmon-ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28.scope.
Nov 29 05:09:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 29 05:09:47 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 05:09:47 compute-0 podman[91820]: 2025-11-29 05:09:47.762882241 +0000 UTC m=+0.046079901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aeef7f376e8b6d910292f51e5b98a927494e6eb3fbfe357a047b5f642f3f0f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aeef7f376e8b6d910292f51e5b98a927494e6eb3fbfe357a047b5f642f3f0f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aeef7f376e8b6d910292f51e5b98a927494e6eb3fbfe357a047b5f642f3f0f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6aeef7f376e8b6d910292f51e5b98a927494e6eb3fbfe357a047b5f642f3f0f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:47 compute-0 podman[91820]: 2025-11-29 05:09:47.888913137 +0000 UTC m=+0.172110747 container init ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:09:47 compute-0 podman[91820]: 2025-11-29 05:09:47.898207692 +0000 UTC m=+0.181405302 container start ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:47 compute-0 podman[91820]: 2025-11-29 05:09:47.902004145 +0000 UTC m=+0.185201785 container attach ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 27.826 iops: 7123.470 elapsed_sec: 0.421
Nov 29 05:09:48 compute-0 ceph-osd[90181]: log_channel(cluster) log [WRN] : OSD bench result of 7123.469535 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
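
[Annotation — not part of the captured log] The warning is arithmetically consistent with the bench line above it and with the default bench parameters visible for osd.2 further down (12288000 bytes at a 4 KiB block size, i.e. 3000 writes): 3000 / 0.421 s ≈ 7125, matching the reported 7123.47 IOPS up to rounding of the elapsed time, and 7123.47 IOPS × 4 KiB ≈ 27.8 MiB/s. Since that result falls outside the 50–500 IOPS range mclock trusts for an hdd-classed device, the 315 IOPS default is kept. A minimal sketch of the recommended override, assuming an independent Fio run has produced a trusted figure (600 below is a placeholder, not a measurement):

    # pin the externally measured IOPS capacity for this hdd-classed OSD (600 is hypothetical)
    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 600
    # confirm the effective value
    ceph config get osd.1 osd_mclock_max_capacity_iops_hdd
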
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 0 waiting for initial osdmap
Nov 29 05:09:48 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1[90177]: 2025-11-29T05:09:48.194+0000 7f1f6d487640 -1 osd.1 0 waiting for initial osdmap
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 13 check_osdmap_features require_osd_release unknown -> reef
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 13 set_numa_affinity not setting numa affinity
Nov 29 05:09:48 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-1[90177]: 2025-11-29T05:09:48.215+0000 7f1f68aaf640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 29 05:09:48 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1814125376; not ready for session (expect reconnect)
Nov 29 05:09:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:48 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:48 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 05:09:48 compute-0 ceph-mon[75176]: pgmap v36: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 29 05:09:48 compute-0 ceph-mon[75176]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 05:09:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 05:09:48 compute-0 ceph-mon[75176]: osdmap e13: 3 total, 1 up, 3 in
Nov 29 05:09:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:48 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2805837806' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:48 compute-0 ceph-mon[75176]: from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 05:09:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 29 05:09:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2805837806' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 05:09:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Nov 29 05:09:48 compute-0 great_sutherland[91541]: pool 'volumes' created
Nov 29 05:09:48 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376] boot
Nov 29 05:09:48 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Nov 29 05:09:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 05:09:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 05:09:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 29 05:09:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 05:09:48 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:48 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:48 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 14 state: booting -> active
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:09:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:09:48 compute-0 systemd[1]: libpod-72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd.scope: Deactivated successfully.
Nov 29 05:09:48 compute-0 podman[91503]: 2025-11-29 05:09:48.517975707 +0000 UTC m=+1.607219392 container died 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 05:09:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-da229cbfedf1daaedab80c22b8f90dcec95f8eb700a15b7ac0e9fd06b2bc16ad-merged.mount: Deactivated successfully.
Nov 29 05:09:48 compute-0 podman[91503]: 2025-11-29 05:09:48.561142947 +0000 UTC m=+1.650386642 container remove 72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd (image=quay.io/ceph/ceph:v18, name=great_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 05:09:48 compute-0 systemd[1]: libpod-conmon-72c9476b8904eb3993cc29c087eab93bd67d34c0855d853f4b037a3defc40fcd.scope: Deactivated successfully.
Nov 29 05:09:48 compute-0 sudo[91482]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:48 compute-0 sudo[92119]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeejncfpnvmiyrtmiqckxfymxozswnwy ; /usr/bin/python3'
Nov 29 05:09:48 compute-0 sudo[92119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]: {
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "osd_id": 0,
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "type": "bluestore"
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:     },
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "osd_id": 1,
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "type": "bluestore"
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:     },
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "osd_id": 2,
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:         "type": "bluestore"
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]:     }
Nov 29 05:09:48 compute-0 jolly_sinoussi[92055]: }
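
[Annotation — not part of the captured log] The JSON printed by the short-lived jolly_sinoussi container above is an inventory of the host's three BlueStore OSDs keyed by osd_uuid, mapping each to the cluster fsid and its LVM backing device; the shape matches a ceph-volume listing, though the exact subcommand is not visible in the log. Assuming the blob were saved to /tmp/osds.json (hypothetical path) and jq is available, it flattens to an "osd_id device" table with:

    # print one "osd_id device" pair per OSD, sorted by id
    jq -r 'to_entries[] | "\(.value.osd_id) \(.value.device)"' /tmp/osds.json | sort -n
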
Nov 29 05:09:48 compute-0 systemd[1]: libpod-ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28.scope: Deactivated successfully.
Nov 29 05:09:48 compute-0 podman[91820]: 2025-11-29 05:09:48.786402156 +0000 UTC m=+1.069599766 container died ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6aeef7f376e8b6d910292f51e5b98a927494e6eb3fbfe357a047b5f642f3f0f5-merged.mount: Deactivated successfully.
Nov 29 05:09:48 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 05:09:48 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 05:09:48 compute-0 podman[91820]: 2025-11-29 05:09:48.85035545 +0000 UTC m=+1.133553060 container remove ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_sinoussi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:09:48 compute-0 systemd[1]: libpod-conmon-ec2bfe9ffcdc8a5d8c85f9641fcc9d8d4d7cfa30e6cc7f4ef0686ab197173c28.scope: Deactivated successfully.
Nov 29 05:09:48 compute-0 python3[92123]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:48 compute-0 sudo[91458]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:48 compute-0 podman[92145]: 2025-11-29 05:09:48.923065149 +0000 UTC m=+0.041419558 container create 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 05:09:48 compute-0 systemd[1]: Started libpod-conmon-761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c.scope.
Nov 29 05:09:48 compute-0 sudo[92155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:48 compute-0 sudo[92155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:48 compute-0 sudo[92155]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f978630d7e3fd0ada1631ce5ddcbd64f9b969d9bbaa8e00c3abef90fb3aa7df8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f978630d7e3fd0ada1631ce5ddcbd64f9b969d9bbaa8e00c3abef90fb3aa7df8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:48 compute-0 podman[92145]: 2025-11-29 05:09:48.992001806 +0000 UTC m=+0.110356245 container init 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:09:48 compute-0 podman[92145]: 2025-11-29 05:09:48.900622553 +0000 UTC m=+0.018976992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:48 compute-0 podman[92145]: 2025-11-29 05:09:48.998704749 +0000 UTC m=+0.117059158 container start 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:49 compute-0 podman[92145]: 2025-11-29 05:09:49.001618459 +0000 UTC m=+0.119972868 container attach 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:49 compute-0 sudo[92188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:09:49 compute-0 sudo[92188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:49 compute-0 sudo[92188]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:49 compute-0 sudo[92215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:49 compute-0 sudo[92215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:49 compute-0 sudo[92215]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:49 compute-0 sudo[92240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:49 compute-0 sudo[92240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:49 compute-0 sudo[92240]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:49 compute-0 sudo[92265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:49 compute-0 sudo[92265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:49 compute-0 sudo[92265]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:49 compute-0 sudo[92290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:09:49 compute-0 sudo[92290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v39: 3 pgs: 2 creating+peering, 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 29 05:09:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 29 05:09:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 05:09:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Nov 29 05:09:49 compute-0 ceph-osd[91343]: osd.2 0 done with init, starting boot process
Nov 29 05:09:49 compute-0 ceph-osd[91343]: osd.2 0 start_boot
Nov 29 05:09:49 compute-0 ceph-osd[91343]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 05:09:49 compute-0 ceph-osd[91343]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 05:09:49 compute-0 ceph-osd[91343]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 05:09:49 compute-0 ceph-osd[91343]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 05:09:49 compute-0 ceph-osd[91343]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 29 05:09:49 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Nov 29 05:09:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:49 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:49 compute-0 ceph-mon[75176]: OSD bench result of 7123.469535 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 05:09:49 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2805837806' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:49 compute-0 ceph-mon[75176]: from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 05:09:49 compute-0 ceph-mon[75176]: osd.1 [v2:192.168.122.100:6806/1814125376,v1:192.168.122.100:6807/1814125376] boot
Nov 29 05:09:49 compute-0 ceph-mon[75176]: osdmap e14: 3 total, 2 up, 3 in
Nov 29 05:09:49 compute-0 ceph-mon[75176]: from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 05:09:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 05:09:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:49 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/688023254; not ready for session (expect reconnect)
Nov 29 05:09:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 15 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:09:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:09:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:49 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 05:09:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3117231839' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:49 compute-0 ceph-mgr[75473]: [devicehealth INFO root] creating main.db for devicehealth
Nov 29 05:09:49 compute-0 podman[92411]: 2025-11-29 05:09:49.736001672 +0000 UTC m=+0.105483367 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:09:49 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Check health
Nov 29 05:09:49 compute-0 ceph-mgr[75473]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 29 05:09:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 05:09:49 compute-0 sudo[92442]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 29 05:09:49 compute-0 sudo[92442]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 05:09:49 compute-0 sudo[92442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 29 05:09:49 compute-0 podman[92411]: 2025-11-29 05:09:49.858841 +0000 UTC m=+0.228322725 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:09:49 compute-0 sudo[92442]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 05:09:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 05:09:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 05:09:50 compute-0 sudo[92290]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:50 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/688023254; not ready for session (expect reconnect)
Nov 29 05:09:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 29 05:09:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:50 compute-0 sudo[92542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:50 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3117231839' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:50 compute-0 sudo[92542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 29 05:09:50 compute-0 clever_lalande[92184]: pool 'backups' created
Nov 29 05:09:50 compute-0 sudo[92542]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:50 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 29 05:09:50 compute-0 podman[92145]: 2025-11-29 05:09:50.546175277 +0000 UTC m=+1.664529676 container died 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:50 compute-0 systemd[1]: libpod-761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c.scope: Deactivated successfully.
Nov 29 05:09:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:50 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:50 compute-0 ceph-mon[75176]: pgmap v39: 3 pgs: 2 creating+peering, 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Nov 29 05:09:50 compute-0 ceph-mon[75176]: from='osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 05:09:50 compute-0 ceph-mon[75176]: osdmap e15: 3 total, 2 up, 3 in
Nov 29 05:09:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:50 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3117231839' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:50 compute-0 ceph-mon[75176]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 05:09:50 compute-0 ceph-mon[75176]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 05:09:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 05:09:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f978630d7e3fd0ada1631ce5ddcbd64f9b969d9bbaa8e00c3abef90fb3aa7df8-merged.mount: Deactivated successfully.
Nov 29 05:09:50 compute-0 sudo[92568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:50 compute-0 sudo[92568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:50 compute-0 sudo[92568]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:50 compute-0 podman[92145]: 2025-11-29 05:09:50.650599776 +0000 UTC m=+1.768954185 container remove 761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c (image=quay.io/ceph/ceph:v18, name=clever_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:09:50 compute-0 sudo[92604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:50 compute-0 sudo[92604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:50 compute-0 systemd[1]: libpod-conmon-761d6edfb7d7d48605d3bc70ab025eb265e5e103948c3a488124e186e3c44b7c.scope: Deactivated successfully.
Nov 29 05:09:50 compute-0 sudo[92604]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:50 compute-0 sudo[92119]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:50 compute-0 sudo[92629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- inventory --format=json-pretty --filter-for-batch
Nov 29 05:09:50 compute-0 sudo[92629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:50 compute-0 sudo[92677]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnnnhvjwgdnalxmzfexwzhmrhrwdatqa ; /usr/bin/python3'
Nov 29 05:09:50 compute-0 sudo[92677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:50 compute-0 python3[92679]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:50 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:09:50 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 15 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=15 pruub=12.508188248s) [] r=-1 lpr=15 pi=[12,15)/1 crt=0'0 mlcod 0'0 active pruub 25.552719116s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:09:50 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=15 pruub=12.508188248s) [] r=-1 lpr=15 pi=[12,15)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.552719116s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:09:51 compute-0 podman[92706]: 2025-11-29 05:09:51.054534141 +0000 UTC m=+0.077077345 container create 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:51 compute-0 podman[92706]: 2025-11-29 05:09:51.015961823 +0000 UTC m=+0.038504947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:51 compute-0 podman[92731]: 2025-11-29 05:09:51.116596621 +0000 UTC m=+0.070864375 container create 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:51 compute-0 systemd[1]: Started libpod-conmon-618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d.scope.
Nov 29 05:09:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63e12078ba9fd8577ff60397d62bf4e9eb6f1c9396ea21236e271c3f199ff62/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a63e12078ba9fd8577ff60397d62bf4e9eb6f1c9396ea21236e271c3f199ff62/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:51 compute-0 podman[92706]: 2025-11-29 05:09:51.162974109 +0000 UTC m=+0.185517273 container init 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:09:51 compute-0 podman[92706]: 2025-11-29 05:09:51.172169663 +0000 UTC m=+0.194712767 container start 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:09:51 compute-0 systemd[1]: Started libpod-conmon-8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe.scope.
Nov 29 05:09:51 compute-0 podman[92731]: 2025-11-29 05:09:51.09642045 +0000 UTC m=+0.050688194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:51 compute-0 podman[92706]: 2025-11-29 05:09:51.194206478 +0000 UTC m=+0.216749602 container attach 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 05:09:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:51 compute-0 podman[92731]: 2025-11-29 05:09:51.233977926 +0000 UTC m=+0.188245660 container init 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:51 compute-0 podman[92731]: 2025-11-29 05:09:51.245052325 +0000 UTC m=+0.199320039 container start 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:51 compute-0 angry_mendeleev[92753]: 167 167
Nov 29 05:09:51 compute-0 systemd[1]: libpod-8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe.scope: Deactivated successfully.
Nov 29 05:09:51 compute-0 podman[92731]: 2025-11-29 05:09:51.264098548 +0000 UTC m=+0.218366272 container attach 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:51 compute-0 podman[92731]: 2025-11-29 05:09:51.265454781 +0000 UTC m=+0.219722545 container died 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v42: 4 pgs: 1 unknown, 1 active+clean, 2 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 05:09:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6cd962b68267d1b418844a975e45ce825dd080630246223f04eee294dff3318-merged.mount: Deactivated successfully.
Nov 29 05:09:51 compute-0 podman[92731]: 2025-11-29 05:09:51.369996374 +0000 UTC m=+0.324264088 container remove 8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:09:51 compute-0 systemd[1]: libpod-conmon-8fe2906e6358ef28e88ebb85bfdb8c6d032784ed5cde13e1ebb646224a1c1cfe.scope: Deactivated successfully.
Nov 29 05:09:51 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/688023254; not ready for session (expect reconnect)
Nov 29 05:09:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:51 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:51 compute-0 podman[92778]: 2025-11-29 05:09:51.566008641 +0000 UTC m=+0.064804177 container create dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 29 05:09:51 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.csskcz(active, since 70s)
Nov 29 05:09:51 compute-0 podman[92778]: 2025-11-29 05:09:51.527995327 +0000 UTC m=+0.026790823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:51 compute-0 systemd[1]: Started libpod-conmon-dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766.scope.
Nov 29 05:09:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Nov 29 05:09:51 compute-0 ceph-mon[75176]: purged_snaps scrub starts
Nov 29 05:09:51 compute-0 ceph-mon[75176]: purged_snaps scrub ok
Nov 29 05:09:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3117231839' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:51 compute-0 ceph-mon[75176]: osdmap e16: 3 total, 2 up, 3 in
Nov 29 05:09:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03df108ba0145d5eb22805c256c8aac2d3a339e335a68d76c724745d3104b308/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03df108ba0145d5eb22805c256c8aac2d3a339e335a68d76c724745d3104b308/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03df108ba0145d5eb22805c256c8aac2d3a339e335a68d76c724745d3104b308/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03df108ba0145d5eb22805c256c8aac2d3a339e335a68d76c724745d3104b308/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:51 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Nov 29 05:09:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:51 compute-0 podman[92778]: 2025-11-29 05:09:51.685522898 +0000 UTC m=+0.184318404 container init dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:51 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:51 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:09:51 compute-0 podman[92778]: 2025-11-29 05:09:51.698287988 +0000 UTC m=+0.197083484 container start dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:51 compute-0 podman[92778]: 2025-11-29 05:09:51.715128578 +0000 UTC m=+0.213924074 container attach dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 05:09:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 05:09:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2479407788' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:52 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/688023254; not ready for session (expect reconnect)
Nov 29 05:09:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:52 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 29 05:09:52 compute-0 ceph-mon[75176]: pgmap v42: 4 pgs: 1 unknown, 1 active+clean, 2 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 05:09:52 compute-0 ceph-mon[75176]: mgrmap e9: compute-0.csskcz(active, since 70s)
Nov 29 05:09:52 compute-0 ceph-mon[75176]: osdmap e17: 3 total, 2 up, 3 in
Nov 29 05:09:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:52 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2479407788' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2479407788' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Nov 29 05:09:52 compute-0 sharp_lovelace[92747]: pool 'images' created
Nov 29 05:09:52 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Nov 29 05:09:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:52 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:52 compute-0 systemd[1]: libpod-618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d.scope: Deactivated successfully.
Nov 29 05:09:52 compute-0 conmon[92747]: conmon 618dc0b63e9825818ac7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d.scope/container/memory.events
Nov 29 05:09:52 compute-0 podman[92706]: 2025-11-29 05:09:52.706010848 +0000 UTC m=+1.728553992 container died 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a63e12078ba9fd8577ff60397d62bf4e9eb6f1c9396ea21236e271c3f199ff62-merged.mount: Deactivated successfully.
Nov 29 05:09:52 compute-0 podman[92706]: 2025-11-29 05:09:52.866561674 +0000 UTC m=+1.889104778 container remove 618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d (image=quay.io/ceph/ceph:v18, name=sharp_lovelace, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:09:52 compute-0 systemd[1]: libpod-conmon-618dc0b63e9825818ac7766573de8c7a7cd33e0721889148e0a1f37fe8b37b1d.scope: Deactivated successfully.
Nov 29 05:09:52 compute-0 sudo[92677]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]: [
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:     {
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:         "available": false,
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:         "ceph_device": false,
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:         "lsm_data": {},
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:         "lvs": [],
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:         "path": "/dev/sr0",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:         "rejected_reasons": [
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "Has a FileSystem",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "Insufficient space (<5GB)"
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:         ],
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:         "sys_api": {
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "actuators": null,
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "device_nodes": "sr0",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "devname": "sr0",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "human_readable_size": "482.00 KB",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "id_bus": "ata",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "model": "QEMU DVD-ROM",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "nr_requests": "2",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "parent": "/dev/sr0",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "partitions": {},
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "path": "/dev/sr0",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "removable": "1",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "rev": "2.5+",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "ro": "0",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "rotational": "1",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "sas_address": "",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "sas_device_handle": "",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "scheduler_mode": "mq-deadline",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "sectors": 0,
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "sectorsize": "2048",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "size": 493568.0,
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "support_discard": "2048",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "type": "disk",
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:             "vendor": "QEMU"
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:         }
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]:     }
Nov 29 05:09:53 compute-0 distracted_mccarthy[92814]: ]
Nov 29 05:09:53 compute-0 sudo[94401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucjkxcjrktkwpfcwwrelxwhophenejeu ; /usr/bin/python3'
Nov 29 05:09:53 compute-0 sudo[94401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:53 compute-0 systemd[1]: libpod-dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766.scope: Deactivated successfully.
Nov 29 05:09:53 compute-0 systemd[1]: libpod-dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766.scope: Consumed 1.370s CPU time.
Nov 29 05:09:53 compute-0 conmon[92814]: conmon dbfc17c47abf9d2d84eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766.scope/container/memory.events
Nov 29 05:09:53 compute-0 podman[92778]: 2025-11-29 05:09:53.056106733 +0000 UTC m=+1.554902249 container died dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 29 05:09:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-03df108ba0145d5eb22805c256c8aac2d3a339e335a68d76c724745d3104b308-merged.mount: Deactivated successfully.
Nov 29 05:09:53 compute-0 podman[92778]: 2025-11-29 05:09:53.143545801 +0000 UTC m=+1.642341287 container remove dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mccarthy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Nov 29 05:09:53 compute-0 systemd[1]: libpod-conmon-dbfc17c47abf9d2d84eba5e5cefc5c113496fd236c637963579a64db790bb766.scope: Deactivated successfully.
Nov 29 05:09:53 compute-0 python3[94487]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:53 compute-0 sudo[92629]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mgr[75473]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Nov 29 05:09:53 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 05:09:53 compute-0 ceph-mgr[75473]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 29 05:09:53 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:53 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 27cd2587-bfa3-40be-a705-31cc158fd97c does not exist
Nov 29 05:09:53 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 58fdda4e-3ead-4ba1-a09e-92ba641fc131 does not exist
Nov 29 05:09:53 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 3bd75c38-28e3-414f-997f-544c1302689f does not exist
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:53 compute-0 podman[94502]: 2025-11-29 05:09:53.252153622 +0000 UTC m=+0.065925255 container create b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 05:09:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v45: 5 pgs: 2 unknown, 1 active+clean, 2 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 05:09:53 compute-0 systemd[1]: Started libpod-conmon-b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd.scope.
Nov 29 05:09:53 compute-0 podman[94502]: 2025-11-29 05:09:53.221207909 +0000 UTC m=+0.034979562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:53 compute-0 sudo[94515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35bef553a812b3d74e3b75ecbc5c70c8ad9226411aa2789731e5d1c723c53a0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35bef553a812b3d74e3b75ecbc5c70c8ad9226411aa2789731e5d1c723c53a0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:53 compute-0 sudo[94515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:53 compute-0 sudo[94515]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:53 compute-0 podman[94502]: 2025-11-29 05:09:53.356678454 +0000 UTC m=+0.170450107 container init b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:53 compute-0 podman[94502]: 2025-11-29 05:09:53.366631376 +0000 UTC m=+0.180403259 container start b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:53 compute-0 podman[94502]: 2025-11-29 05:09:53.377929981 +0000 UTC m=+0.191701614 container attach b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:53 compute-0 ceph-osd[91343]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 25.909 iops: 6632.612 elapsed_sec: 0.452
Nov 29 05:09:53 compute-0 ceph-osd[91343]: log_channel(cluster) log [WRN] : OSD bench result of 6632.611728 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 05:09:53 compute-0 ceph-osd[91343]: osd.2 0 waiting for initial osdmap
Nov 29 05:09:53 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2[91339]: 2025-11-29T05:09:53.391+0000 7f9a1b67b640 -1 osd.2 0 waiting for initial osdmap
Nov 29 05:09:53 compute-0 ceph-osd[91343]: osd.2 18 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 05:09:53 compute-0 ceph-osd[91343]: osd.2 18 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 29 05:09:53 compute-0 ceph-osd[91343]: osd.2 18 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 05:09:53 compute-0 ceph-osd[91343]: osd.2 18 check_osdmap_features require_osd_release unknown -> reef
Nov 29 05:09:53 compute-0 sudo[94545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:53 compute-0 sudo[94545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:53 compute-0 sudo[94545]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:53 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-osd-2[91339]: 2025-11-29T05:09:53.425+0000 7f9a1648c640 -1 osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 05:09:53 compute-0 ceph-osd[91343]: osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 05:09:53 compute-0 ceph-osd[91343]: osd.2 18 set_numa_affinity not setting numa affinity
Nov 29 05:09:53 compute-0 ceph-osd[91343]: osd.2 18 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 29 05:09:53 compute-0 sudo[94571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:53 compute-0 sudo[94571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:53 compute-0 sudo[94571]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:53 compute-0 ceph-mgr[75473]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/688023254; not ready for session (expect reconnect)
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mgr[75473]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 05:09:53 compute-0 sudo[94598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:09:53 compute-0 sudo[94598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2479407788' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:53 compute-0 ceph-mon[75176]: osdmap e18: 3 total, 2 up, 3 in
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:09:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:53 compute-0 podman[94682]: 2025-11-29 05:09:53.861291327 +0000 UTC m=+0.050568311 container create c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:53 compute-0 systemd[1]: Started libpod-conmon-c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205.scope.
Nov 29 05:09:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:53 compute-0 podman[94682]: 2025-11-29 05:09:53.834973457 +0000 UTC m=+0.024250431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 05:09:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/344261826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:53 compute-0 podman[94682]: 2025-11-29 05:09:53.951148502 +0000 UTC m=+0.140425496 container init c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:53 compute-0 podman[94682]: 2025-11-29 05:09:53.957226961 +0000 UTC m=+0.146503915 container start c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:09:53 compute-0 podman[94682]: 2025-11-29 05:09:53.960880229 +0000 UTC m=+0.150157223 container attach c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:53 compute-0 zealous_merkle[94698]: 167 167
Nov 29 05:09:53 compute-0 systemd[1]: libpod-c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205.scope: Deactivated successfully.
Nov 29 05:09:53 compute-0 podman[94682]: 2025-11-29 05:09:53.963165255 +0000 UTC m=+0.152442199 container died c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:09:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba5f51c46fc527eee6b6978a0409f2d0fea14632b2f7f93715a3b59dd0c788f-merged.mount: Deactivated successfully.
Nov 29 05:09:54 compute-0 podman[94682]: 2025-11-29 05:09:54.001357184 +0000 UTC m=+0.190634148 container remove c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:54 compute-0 systemd[1]: libpod-conmon-c56a93616d33ba3d01fdfeb118d67a1bdf7bfc56c229ed1775e0504693a0a205.scope: Deactivated successfully.
Nov 29 05:09:54 compute-0 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 05:09:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:54 compute-0 podman[94726]: 2025-11-29 05:09:54.178973184 +0000 UTC m=+0.048112071 container create 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 05:09:54 compute-0 systemd[1]: Started libpod-conmon-5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340.scope.
Nov 29 05:09:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 29 05:09:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:54 compute-0 podman[94726]: 2025-11-29 05:09:54.155225336 +0000 UTC m=+0.024364223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:54 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/344261826' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Nov 29 05:09:54 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254] boot
Nov 29 05:09:54 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Nov 29 05:09:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 05:09:54 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:54 compute-0 brave_haslett[94540]: pool 'cephfs.cephfs.meta' created
Nov 29 05:09:54 compute-0 ceph-osd[91343]: osd.2 19 state: booting -> active
Nov 29 05:09:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[18,19)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:09:54 compute-0 podman[94502]: 2025-11-29 05:09:54.26844756 +0000 UTC m=+1.082219203 container died b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:09:54 compute-0 podman[94726]: 2025-11-29 05:09:54.274059617 +0000 UTC m=+0.143198504 container init 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:54 compute-0 systemd[1]: libpod-b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd.scope: Deactivated successfully.
Nov 29 05:09:54 compute-0 podman[94726]: 2025-11-29 05:09:54.281496847 +0000 UTC m=+0.150635714 container start 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:54 compute-0 podman[94726]: 2025-11-29 05:09:54.286007468 +0000 UTC m=+0.155146335 container attach 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b35bef553a812b3d74e3b75ecbc5c70c8ad9226411aa2789731e5d1c723c53a0-merged.mount: Deactivated successfully.
Nov 29 05:09:54 compute-0 podman[94502]: 2025-11-29 05:09:54.321667025 +0000 UTC m=+1.135438658 container remove b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd (image=quay.io/ceph/ceph:v18, name=brave_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:09:54 compute-0 systemd[1]: libpod-conmon-b8864b9423a8cd7548e5728ef326925db1a5a5085298256a0448352c97e9babd.scope: Deactivated successfully.
Nov 29 05:09:54 compute-0 sudo[94401]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:54 compute-0 sudo[94783]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jodzozezszmpzymvjdhdoposzuyqyygp ; /usr/bin/python3'
Nov 29 05:09:54 compute-0 sudo[94783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:54 compute-0 python3[94785]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:54 compute-0 podman[94786]: 2025-11-29 05:09:54.642082048 +0000 UTC m=+0.041742197 container create f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:09:54 compute-0 systemd[1]: Started libpod-conmon-f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020.scope.
Nov 29 05:09:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09d11d71c6f8b237e799414f93a28cb82152221cf07a7e4cd78b9e14aecf74d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09d11d71c6f8b237e799414f93a28cb82152221cf07a7e4cd78b9e14aecf74d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:54 compute-0 ceph-mon[75176]: Adjusting osd_memory_target on compute-0 to 43690k
Nov 29 05:09:54 compute-0 ceph-mon[75176]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Nov 29 05:09:54 compute-0 ceph-mon[75176]: pgmap v45: 5 pgs: 2 unknown, 1 active+clean, 2 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 29 05:09:54 compute-0 ceph-mon[75176]: OSD bench result of 6632.611728 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 05:09:54 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/344261826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:54 compute-0 ceph-mon[75176]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 05:09:54 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/344261826' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:54 compute-0 ceph-mon[75176]: osd.2 [v2:192.168.122.100:6810/688023254,v1:192.168.122.100:6811/688023254] boot
Nov 29 05:09:54 compute-0 ceph-mon[75176]: osdmap e19: 3 total, 3 up, 3 in
Nov 29 05:09:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 05:09:54 compute-0 podman[94786]: 2025-11-29 05:09:54.707922169 +0000 UTC m=+0.107582358 container init f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:54 compute-0 podman[94786]: 2025-11-29 05:09:54.713653479 +0000 UTC m=+0.113313628 container start f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:09:54 compute-0 podman[94786]: 2025-11-29 05:09:54.717236116 +0000 UTC m=+0.116896305 container attach f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:54 compute-0 podman[94786]: 2025-11-29 05:09:54.622581094 +0000 UTC m=+0.022241303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:55 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 19 pg[6.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:09:55 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 19 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19 pruub=8.281105995s) [2] r=-1 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.552719116s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:09:55 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 19 pg[2.0( empty local-lis/les=12/13 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19 pruub=8.280880928s) [2] r=-1 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 25.552719116s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:09:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19) [2] r=0 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:09:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 29 05:09:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 29 05:09:55 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 29 05:09:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=12/12 lis/c=12/12 les/c/f=13/13/0 sis=19) [2] r=0 lpr=19 pi=[12,19)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:09:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[18,19)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:09:55 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:09:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v48: 6 pgs: 1 creating+peering, 1 unknown, 4 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 29 05:09:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 05:09:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/746565820' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:55 compute-0 frosty_yonath[94742]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:09:55 compute-0 frosty_yonath[94742]: --> relative data size: 1.0
Nov 29 05:09:55 compute-0 frosty_yonath[94742]: --> All data devices are unavailable
Nov 29 05:09:55 compute-0 systemd[1]: libpod-5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340.scope: Deactivated successfully.
Nov 29 05:09:55 compute-0 systemd[1]: libpod-5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340.scope: Consumed 1.019s CPU time.
Nov 29 05:09:55 compute-0 podman[94726]: 2025-11-29 05:09:55.536310208 +0000 UTC m=+1.405449115 container died 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-410e9a38fb803738e1a8a7a147587ff553becd52113c105aa426ae93bc230a21-merged.mount: Deactivated successfully.
Nov 29 05:09:55 compute-0 podman[94726]: 2025-11-29 05:09:55.592928645 +0000 UTC m=+1.462067492 container remove 5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:09:55 compute-0 systemd[1]: libpod-conmon-5fec294d37e4710e05a1394740a30fcdc1fb94db351a862486cdb9cddfd39340.scope: Deactivated successfully.
Nov 29 05:09:55 compute-0 sudo[94598]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:55 compute-0 sudo[94866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:55 compute-0 sudo[94866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:55 compute-0 sudo[94866]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:55 compute-0 sudo[94891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:55 compute-0 sudo[94891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:55 compute-0 sudo[94891]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:55 compute-0 sudo[94916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:55 compute-0 sudo[94916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:55 compute-0 sudo[94916]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:55 compute-0 sudo[94941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:09:55 compute-0 sudo[94941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 29 05:09:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/746565820' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 29 05:09:56 compute-0 hungry_leakey[94802]: pool 'cephfs.cephfs.data' created
Nov 29 05:09:56 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 29 05:09:56 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 21 pg[7.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:09:56 compute-0 ceph-mon[75176]: osdmap e20: 3 total, 3 up, 3 in
Nov 29 05:09:56 compute-0 ceph-mon[75176]: pgmap v48: 6 pgs: 1 creating+peering, 1 unknown, 4 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 29 05:09:56 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/746565820' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 05:09:56 compute-0 systemd[1]: libpod-f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020.scope: Deactivated successfully.
Nov 29 05:09:56 compute-0 podman[94786]: 2025-11-29 05:09:56.275600379 +0000 UTC m=+1.675260528 container died f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:09:56 compute-0 podman[95006]: 2025-11-29 05:09:56.297565993 +0000 UTC m=+0.050402907 container create eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d09d11d71c6f8b237e799414f93a28cb82152221cf07a7e4cd78b9e14aecf74d-merged.mount: Deactivated successfully.
Nov 29 05:09:56 compute-0 podman[95006]: 2025-11-29 05:09:56.275050425 +0000 UTC m=+0.027887389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:56 compute-0 systemd[1]: Started libpod-conmon-eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba.scope.
Nov 29 05:09:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:56 compute-0 podman[94786]: 2025-11-29 05:09:56.485839262 +0000 UTC m=+1.885499421 container remove f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020 (image=quay.io/ceph/ceph:v18, name=hungry_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:09:56 compute-0 sudo[94783]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:56 compute-0 podman[95006]: 2025-11-29 05:09:56.667305515 +0000 UTC m=+0.420142449 container init eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:56 compute-0 podman[95006]: 2025-11-29 05:09:56.67530048 +0000 UTC m=+0.428137414 container start eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:56 compute-0 podman[95006]: 2025-11-29 05:09:56.67900032 +0000 UTC m=+0.431837274 container attach eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:56 compute-0 nice_shtern[95034]: 167 167
Nov 29 05:09:56 compute-0 systemd[1]: libpod-eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba.scope: Deactivated successfully.
Nov 29 05:09:56 compute-0 podman[95006]: 2025-11-29 05:09:56.681814699 +0000 UTC m=+0.434651613 container died eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:56 compute-0 sudo[95062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdyxbmifkecjnmhkqtqgvstzohdtdacn ; /usr/bin/python3'
Nov 29 05:09:56 compute-0 sudo[95062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ad2354e53d1422d8d5108d3d471a095c17a9914d7d6e1c6237393c553fc4f0e-merged.mount: Deactivated successfully.
Nov 29 05:09:56 compute-0 podman[95006]: 2025-11-29 05:09:56.727131011 +0000 UTC m=+0.479967935 container remove eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 05:09:56 compute-0 systemd[1]: libpod-conmon-eacbdb9cfb4e168fa272eefae7057377061f0357e62daba7f8267fb06cc6d2ba.scope: Deactivated successfully.
Nov 29 05:09:56 compute-0 systemd[1]: libpod-conmon-f822805d4b040fdf4527814cdd0fb3de879e3f741089fc87d79d212b02676020.scope: Deactivated successfully.
Nov 29 05:09:56 compute-0 python3[95066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:09:56 compute-0 podman[95085]: 2025-11-29 05:09:56.948230488 +0000 UTC m=+0.105648750 container create f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:56 compute-0 podman[95085]: 2025-11-29 05:09:56.863820185 +0000 UTC m=+0.021238427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:56 compute-0 podman[95086]: 2025-11-29 05:09:56.87796873 +0000 UTC m=+0.028665769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:56 compute-0 podman[95086]: 2025-11-29 05:09:56.977533071 +0000 UTC m=+0.128230150 container create 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:09:56 compute-0 systemd[1]: Started libpod-conmon-f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7.scope.
Nov 29 05:09:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec9495060c104269bb19794effad620f10e6c708603f60af7947206c7b701a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec9495060c104269bb19794effad620f10e6c708603f60af7947206c7b701a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:57 compute-0 systemd[1]: Started libpod-conmon-57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb.scope.
Nov 29 05:09:57 compute-0 podman[95085]: 2025-11-29 05:09:57.027015924 +0000 UTC m=+0.184434196 container init f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 05:09:57 compute-0 podman[95085]: 2025-11-29 05:09:57.036988747 +0000 UTC m=+0.194406969 container start f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:09:57 compute-0 podman[95085]: 2025-11-29 05:09:57.040683747 +0000 UTC m=+0.198101979 container attach f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7a9e0df2e711745720f01c7d8b36752b8940354af5aa82242305cff1658a34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7a9e0df2e711745720f01c7d8b36752b8940354af5aa82242305cff1658a34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7a9e0df2e711745720f01c7d8b36752b8940354af5aa82242305cff1658a34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7a9e0df2e711745720f01c7d8b36752b8940354af5aa82242305cff1658a34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:57 compute-0 podman[95086]: 2025-11-29 05:09:57.067358786 +0000 UTC m=+0.218055825 container init 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:57 compute-0 podman[95086]: 2025-11-29 05:09:57.074498599 +0000 UTC m=+0.225195648 container start 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:09:57 compute-0 podman[95086]: 2025-11-29 05:09:57.078947627 +0000 UTC m=+0.229644686 container attach 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 29 05:09:57 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/746565820' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 05:09:57 compute-0 ceph-mon[75176]: osdmap e21: 3 total, 3 up, 3 in
Nov 29 05:09:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v50: 7 pgs: 1 creating+peering, 2 unknown, 4 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 29 05:09:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 29 05:09:57 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 29 05:09:57 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:09:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 29 05:09:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2552320646' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 05:09:57 compute-0 objective_sutherland[95120]: {
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:     "0": [
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:         {
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "devices": [
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "/dev/loop3"
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             ],
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_name": "ceph_lv0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_size": "21470642176",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "name": "ceph_lv0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "tags": {
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.cluster_name": "ceph",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.crush_device_class": "",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.encrypted": "0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.osd_id": "0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.type": "block",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.vdo": "0"
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             },
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "type": "block",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "vg_name": "ceph_vg0"
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:         }
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:     ],
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:     "1": [
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:         {
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "devices": [
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "/dev/loop4"
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             ],
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_name": "ceph_lv1",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_size": "21470642176",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "name": "ceph_lv1",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "tags": {
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.cluster_name": "ceph",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.crush_device_class": "",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.encrypted": "0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.osd_id": "1",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.type": "block",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.vdo": "0"
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             },
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "type": "block",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "vg_name": "ceph_vg1"
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:         }
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:     ],
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:     "2": [
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:         {
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "devices": [
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "/dev/loop5"
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             ],
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_name": "ceph_lv2",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_size": "21470642176",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "name": "ceph_lv2",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "tags": {
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.cluster_name": "ceph",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.crush_device_class": "",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.encrypted": "0",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.osd_id": "2",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.type": "block",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:                 "ceph.vdo": "0"
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             },
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "type": "block",
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:             "vg_name": "ceph_vg2"
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:         }
Nov 29 05:09:57 compute-0 objective_sutherland[95120]:     ]
Nov 29 05:09:57 compute-0 objective_sutherland[95120]: }
Nov 29 05:09:57 compute-0 systemd[1]: libpod-57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb.scope: Deactivated successfully.
Nov 29 05:09:57 compute-0 podman[95086]: 2025-11-29 05:09:57.858185771 +0000 UTC m=+1.008882820 container died 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:09:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d7a9e0df2e711745720f01c7d8b36752b8940354af5aa82242305cff1658a34-merged.mount: Deactivated successfully.
Nov 29 05:09:57 compute-0 podman[95086]: 2025-11-29 05:09:57.919099882 +0000 UTC m=+1.069796921 container remove 57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 29 05:09:57 compute-0 systemd[1]: libpod-conmon-57be793e139c41eab43da01958d3520298176988b6222560e8efa9b4031ed4fb.scope: Deactivated successfully.
Nov 29 05:09:57 compute-0 sudo[94941]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:58 compute-0 sudo[95162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:58 compute-0 sudo[95162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:58 compute-0 sudo[95162]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:58 compute-0 sudo[95187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:09:58 compute-0 sudo[95187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:58 compute-0 sudo[95187]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:58 compute-0 sudo[95212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:09:58 compute-0 sudo[95212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:58 compute-0 sudo[95212]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 29 05:09:58 compute-0 ceph-mon[75176]: pgmap v50: 7 pgs: 1 creating+peering, 2 unknown, 4 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Nov 29 05:09:58 compute-0 ceph-mon[75176]: osdmap e22: 3 total, 3 up, 3 in
Nov 29 05:09:58 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2552320646' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 05:09:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2552320646' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 05:09:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 29 05:09:58 compute-0 awesome_engelbart[95115]: enabled application 'rbd' on pool 'vms'
Nov 29 05:09:58 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 29 05:09:58 compute-0 sudo[95237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:09:58 compute-0 sudo[95237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:09:58 compute-0 systemd[1]: libpod-f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7.scope: Deactivated successfully.
Nov 29 05:09:58 compute-0 podman[95085]: 2025-11-29 05:09:58.320811983 +0000 UTC m=+1.478230215 container died f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 29 05:09:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec9495060c104269bb19794effad620f10e6c708603f60af7947206c7b701a2-merged.mount: Deactivated successfully.
Nov 29 05:09:58 compute-0 podman[95085]: 2025-11-29 05:09:58.370626675 +0000 UTC m=+1.528044897 container remove f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7 (image=quay.io/ceph/ceph:v18, name=awesome_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:09:58 compute-0 systemd[1]: libpod-conmon-f99209abe00f0b8387651894a9023be0d1b0710fca7f7491869e2b36e23fa6b7.scope: Deactivated successfully.
Nov 29 05:09:58 compute-0 sudo[95062]: pam_unix(sudo:session): session closed for user root
Nov 29 05:09:58 compute-0 sudo[95309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czcxpsttjvlnakzkhzuwpwakeuopygho ; /usr/bin/python3'
Nov 29 05:09:58 compute-0 sudo[95309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:09:58 compute-0 python3[95313]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
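[annotation] The Ansible task logged above shells out to podman to run the containerized ceph client once per pool. A hedged Python equivalent of that per-pool call is sketched below; the image, fsid, config and keyring paths are copied from the logged command, the pool list mirrors the pools enabled elsewhere in this log, and the assimilate_ceph.conf volume mount is dropped since this particular subcommand does not read it.

#!/usr/bin/env python3
# Sketch: tag pools with the 'rbd' application the same way the
# logged Ansible task does, via the containerized ceph client.
import subprocess

FSID = "93f82912-647c-5e78-b081-707d0a2966d8"
IMAGE = "quay.io/ceph/ceph:v18"

def enable_rbd(pool: str) -> None:
    subprocess.run(
        ["podman", "run", "--rm", "--net=host", "--ipc=host",
         "--volume", "/etc/ceph:/etc/ceph:z",
         "--entrypoint", "ceph", IMAGE,
         "--fsid", FSID,
         "-c", "/etc/ceph/ceph.conf",
         "-k", "/etc/ceph/ceph.client.admin.keyring",
         "osd", "pool", "application", "enable", pool, "rbd"],
        check=True)

for pool in ("vms", "volumes", "images", "backups"):
    enable_rbd(pool)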
Nov 29 05:09:58 compute-0 podman[95342]: 2025-11-29 05:09:58.722534003 +0000 UTC m=+0.047560207 container create 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:09:58 compute-0 podman[95340]: 2025-11-29 05:09:58.730920537 +0000 UTC m=+0.059813005 container create 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:09:58 compute-0 systemd[1]: Started libpod-conmon-4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2.scope.
Nov 29 05:09:58 compute-0 systemd[1]: Started libpod-conmon-77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f.scope.
Nov 29 05:09:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7d857d29652ae4cfb6185f34dd9b15ef58766e30800bf2db2a1715e61fb100/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7d857d29652ae4cfb6185f34dd9b15ef58766e30800bf2db2a1715e61fb100/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:58 compute-0 podman[95342]: 2025-11-29 05:09:58.698470998 +0000 UTC m=+0.023497262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:09:58 compute-0 podman[95340]: 2025-11-29 05:09:58.699456443 +0000 UTC m=+0.028348961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:58 compute-0 podman[95342]: 2025-11-29 05:09:58.812724508 +0000 UTC m=+0.137750792 container init 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:09:58 compute-0 podman[95340]: 2025-11-29 05:09:58.815494135 +0000 UTC m=+0.144386623 container init 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:09:58 compute-0 podman[95340]: 2025-11-29 05:09:58.82228598 +0000 UTC m=+0.151178448 container start 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:58 compute-0 podman[95342]: 2025-11-29 05:09:58.823703095 +0000 UTC m=+0.148729299 container start 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:09:58 compute-0 lucid_carson[95372]: 167 167
Nov 29 05:09:58 compute-0 podman[95340]: 2025-11-29 05:09:58.826675707 +0000 UTC m=+0.155568285 container attach 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:58 compute-0 systemd[1]: libpod-77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f.scope: Deactivated successfully.
Nov 29 05:09:58 compute-0 podman[95342]: 2025-11-29 05:09:58.831198977 +0000 UTC m=+0.156225231 container attach 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:09:58 compute-0 podman[95340]: 2025-11-29 05:09:58.832854817 +0000 UTC m=+0.161747305 container died 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 05:09:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c92ccc84ced699981d3e7426e5dde5c7fa4a77f8d8efd7c6ca2b14469db0d44-merged.mount: Deactivated successfully.
Nov 29 05:09:58 compute-0 podman[95340]: 2025-11-29 05:09:58.880197939 +0000 UTC m=+0.209090417 container remove 77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_carson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 29 05:09:58 compute-0 systemd[1]: libpod-conmon-77d1b026d657f45f5202a4399b420b9e9363c71e5538b4c2a46b52217f8e3a4f.scope: Deactivated successfully.
Nov 29 05:09:59 compute-0 podman[95401]: 2025-11-29 05:09:59.039668587 +0000 UTC m=+0.043502149 container create 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:09:59 compute-0 systemd[1]: Started libpod-conmon-7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac.scope.
Nov 29 05:09:59 compute-0 podman[95401]: 2025-11-29 05:09:59.019294411 +0000 UTC m=+0.023128033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:09:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b840539dddf16ba0271174f8553668ff940b8a4c8c1f79d847001ebb76cbf03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b840539dddf16ba0271174f8553668ff940b8a4c8c1f79d847001ebb76cbf03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b840539dddf16ba0271174f8553668ff940b8a4c8c1f79d847001ebb76cbf03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b840539dddf16ba0271174f8553668ff940b8a4c8c1f79d847001ebb76cbf03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:09:59 compute-0 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 05:09:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:09:59 compute-0 podman[95401]: 2025-11-29 05:09:59.172053867 +0000 UTC m=+0.175887449 container init 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:09:59 compute-0 podman[95401]: 2025-11-29 05:09:59.185119925 +0000 UTC m=+0.188953497 container start 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:09:59 compute-0 podman[95401]: 2025-11-29 05:09:59.187834991 +0000 UTC m=+0.191668563 container attach 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:09:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 1 creating+peering, 1 unknown, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:09:59 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2552320646' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 05:09:59 compute-0 ceph-mon[75176]: osdmap e23: 3 total, 3 up, 3 in
Nov 29 05:09:59 compute-0 ceph-mon[75176]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
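[annotation] The POOL_APP_NOT_ENABLED warning above is exactly what the per-pool `osd pool application enable` calls in this log are clearing, one pool at a time. One way to see which pools still lack an application tag is sketched below, assuming `ceph osd pool ls detail -f json` returns a list of pool records carrying "pool_name" and "application_metadata" (an assumption about the JSON schema, not something shown in this log).

#!/usr/bin/env python3
# Sketch: list pools with no application enabled, i.e. the condition
# behind POOL_APP_NOT_ENABLED. Field names are assumed as noted above.
import json
import subprocess

pools = json.loads(subprocess.run(
    ["ceph", "osd", "pool", "ls", "detail", "-f", "json"],
    check=True, capture_output=True, text=True).stdout)

untagged = [p["pool_name"] for p in pools if not p.get("application_metadata")]
print("pools without an application:", ", ".join(untagged) or "none")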
Nov 29 05:09:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 29 05:09:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803824817' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 05:10:00 compute-0 determined_bell[95419]: {
Nov 29 05:10:00 compute-0 determined_bell[95419]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "osd_id": 0,
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "type": "bluestore"
Nov 29 05:10:00 compute-0 determined_bell[95419]:     },
Nov 29 05:10:00 compute-0 determined_bell[95419]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "osd_id": 1,
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "type": "bluestore"
Nov 29 05:10:00 compute-0 determined_bell[95419]:     },
Nov 29 05:10:00 compute-0 determined_bell[95419]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "osd_id": 2,
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:00 compute-0 determined_bell[95419]:         "type": "bluestore"
Nov 29 05:10:00 compute-0 determined_bell[95419]:     }
Nov 29 05:10:00 compute-0 determined_bell[95419]: }
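[annotation] The JSON block ending above is a complete `ceph-volume raw list --format json` report from the determined_bell container: a dict keyed by osd_uuid, each entry carrying ceph_fsid, device, osd_id and type, with all three OSDs belonging to cluster 93f82912-647c-5e78-b081-707d0a2966d8. A small validation sketch over that exact shape follows; pipe the JSON in on stdin.

#!/usr/bin/env python3
# Sketch: validate raw-list output (shape as logged above) against the
# expected cluster fsid, flagging devices from a different cluster.
import json
import sys

EXPECTED_FSID = "93f82912-647c-5e78-b081-707d0a2966d8"

devices = json.load(sys.stdin)
for osd_uuid, dev in sorted(devices.items(), key=lambda kv: kv[1]["osd_id"]):
    ok = dev["ceph_fsid"] == EXPECTED_FSID
    print(f"osd.{dev['osd_id']} {dev['device']} ({dev['type']}) "
          f"{'OK' if ok else 'WRONG CLUSTER ' + dev['ceph_fsid']}")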
Nov 29 05:10:00 compute-0 systemd[1]: libpod-7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac.scope: Deactivated successfully.
Nov 29 05:10:00 compute-0 systemd[1]: libpod-7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac.scope: Consumed 1.099s CPU time.
Nov 29 05:10:00 compute-0 podman[95401]: 2025-11-29 05:10:00.286690437 +0000 UTC m=+1.290524009 container died 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 29 05:10:00 compute-0 ceph-mon[75176]: pgmap v53: 7 pgs: 1 creating+peering, 1 unknown, 5 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:00 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/803824817' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 05:10:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/803824817' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 05:10:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 29 05:10:00 compute-0 jovial_herschel[95370]: enabled application 'rbd' on pool 'volumes'
Nov 29 05:10:00 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 29 05:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b840539dddf16ba0271174f8553668ff940b8a4c8c1f79d847001ebb76cbf03-merged.mount: Deactivated successfully.
Nov 29 05:10:00 compute-0 systemd[1]: libpod-4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2.scope: Deactivated successfully.
Nov 29 05:10:00 compute-0 podman[95342]: 2025-11-29 05:10:00.343693424 +0000 UTC m=+1.668719668 container died 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 05:10:00 compute-0 podman[95401]: 2025-11-29 05:10:00.359560079 +0000 UTC m=+1.363393651 container remove 7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_bell, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:10:00 compute-0 systemd[1]: libpod-conmon-7d13cf697c7c51ce3b3ff2e62378eaee3cafb9737c885267b2a1b04e96ac21ac.scope: Deactivated successfully.
Nov 29 05:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b7d857d29652ae4cfb6185f34dd9b15ef58766e30800bf2db2a1715e61fb100-merged.mount: Deactivated successfully.
Nov 29 05:10:00 compute-0 podman[95342]: 2025-11-29 05:10:00.39368212 +0000 UTC m=+1.718708324 container remove 4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2 (image=quay.io/ceph/ceph:v18, name=jovial_herschel, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:10:00 compute-0 sudo[95237]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:00 compute-0 systemd[1]: libpod-conmon-4afa2a9d7e90d5621350aa38d62217a1df49e28c42cc5661b4fc8f42898d90d2.scope: Deactivated successfully.
Nov 29 05:10:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:00 compute-0 sudo[95309]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:00 compute-0 sudo[95499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:00 compute-0 sudo[95499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:00 compute-0 sudo[95499]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:00 compute-0 sudo[95524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:10:00 compute-0 sudo[95524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:00 compute-0 sudo[95524]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:00 compute-0 sudo[95570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzwbgmieirikkyfbmkvpqswqehebumyb ; /usr/bin/python3'
Nov 29 05:10:00 compute-0 sudo[95570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:00 compute-0 sudo[95575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:00 compute-0 sudo[95575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:00 compute-0 sudo[95575]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:00 compute-0 python3[95574]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:00 compute-0 sudo[95600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:00 compute-0 sudo[95600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:00 compute-0 sudo[95600]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:00 compute-0 podman[95623]: 2025-11-29 05:10:00.752113928 +0000 UTC m=+0.054562509 container create 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:00 compute-0 sudo[95635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:00 compute-0 systemd[1]: Started libpod-conmon-4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7.scope.
Nov 29 05:10:00 compute-0 sudo[95635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:00 compute-0 sudo[95635]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:00 compute-0 podman[95623]: 2025-11-29 05:10:00.730278746 +0000 UTC m=+0.032727347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1142e63b434f75cf825904ff12c3da5bd101855bbb254ddb7202458bac744b0a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1142e63b434f75cf825904ff12c3da5bd101855bbb254ddb7202458bac744b0a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:00 compute-0 podman[95623]: 2025-11-29 05:10:00.85294334 +0000 UTC m=+0.155392011 container init 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 05:10:00 compute-0 podman[95623]: 2025-11-29 05:10:00.86160518 +0000 UTC m=+0.164053771 container start 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 29 05:10:00 compute-0 podman[95623]: 2025-11-29 05:10:00.865040054 +0000 UTC m=+0.167488645 container attach 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 29 05:10:00 compute-0 sudo[95667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:10:00 compute-0 sudo[95667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
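[annotation] The `cephadm ... ls` call above inventories the daemons deployed on this host and prints JSON. A sketch of consuming that output follows; the cephadm script path is copied verbatim from the logged sudo command, and the output is assumed to be a JSON array of daemon records each carrying at least a "name" key (other keys vary by cephadm version). Run as root, as the log does.

#!/usr/bin/env python3
# Sketch: invoke the host's cephadm binary (path from the logged
# command) and list the daemon names it reports.
import json
import subprocess

CEPHADM = ("/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/"
           "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

out = subprocess.run(
    ["python3", CEPHADM, "ls"],
    check=True, capture_output=True, text=True).stdout
for daemon in json.loads(out):
    print(daemon.get("name"))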
Nov 29 05:10:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:01 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/803824817' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 05:10:01 compute-0 ceph-mon[75176]: osdmap e24: 3 total, 3 up, 3 in
Nov 29 05:10:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:01 compute-0 podman[95784]: 2025-11-29 05:10:01.363317473 +0000 UTC m=+0.063777032 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:10:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 29 05:10:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1633300083' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 05:10:01 compute-0 podman[95784]: 2025-11-29 05:10:01.476066076 +0000 UTC m=+0.176525575 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:01 compute-0 sudo[95667]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:02 compute-0 sudo[95909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:02 compute-0 sudo[95909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:02 compute-0 sudo[95909]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:02 compute-0 sudo[95934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:02 compute-0 sudo[95934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:02 compute-0 sudo[95934]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:02 compute-0 sudo[95959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:02 compute-0 sudo[95959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:02 compute-0 sudo[95959]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:02 compute-0 sudo[95984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:10:02 compute-0 sudo[95984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 29 05:10:02 compute-0 ceph-mon[75176]: pgmap v55: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:02 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1633300083' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 05:10:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1633300083' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 05:10:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 29 05:10:02 compute-0 sharp_hypatia[95663]: enabled application 'rbd' on pool 'backups'
Nov 29 05:10:02 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 29 05:10:02 compute-0 systemd[1]: libpod-4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7.scope: Deactivated successfully.
Nov 29 05:10:02 compute-0 podman[95623]: 2025-11-29 05:10:02.355080995 +0000 UTC m=+1.657529586 container died 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 05:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1142e63b434f75cf825904ff12c3da5bd101855bbb254ddb7202458bac744b0a-merged.mount: Deactivated successfully.
Nov 29 05:10:02 compute-0 podman[95623]: 2025-11-29 05:10:02.395547 +0000 UTC m=+1.697995571 container remove 4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7 (image=quay.io/ceph/ceph:v18, name=sharp_hypatia, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:02 compute-0 systemd[1]: libpod-conmon-4d753d9a0ba4db09e831f076f8e78c7a800e506a1297ba094f72cf8dace082c7.scope: Deactivated successfully.
Nov 29 05:10:02 compute-0 sudo[95570]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:02 compute-0 sudo[96061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yutqchxermypgntvbuaozgywghoevcat ; /usr/bin/python3'
Nov 29 05:10:02 compute-0 sudo[96061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:02 compute-0 sudo[95984]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:02 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:10:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:10:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:10:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:02 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 7ddb7ed6-e687-41c2-bcff-7d9f67453acc does not exist
Nov 29 05:10:02 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 6feaff37-d51b-4844-8d59-219664d05489 does not exist
Nov 29 05:10:02 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 02c5b774-29bf-41d9-ae52-20b9dbe37fad does not exist
Nov 29 05:10:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:10:02 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:10:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:10:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:10:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:02 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:02 compute-0 python3[96063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:02 compute-0 sudo[96076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:02 compute-0 sudo[96076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:02 compute-0 sudo[96076]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:02 compute-0 podman[96081]: 2025-11-29 05:10:02.734813231 +0000 UTC m=+0.036871128 container create bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:02 compute-0 systemd[1]: Started libpod-conmon-bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1.scope.
Nov 29 05:10:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:02 compute-0 sudo[96114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:02 compute-0 sudo[96114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aefc9da802f903bdc738b8b62d6c966408cbbb5e84d36d0aa071330dbdf4196/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aefc9da802f903bdc738b8b62d6c966408cbbb5e84d36d0aa071330dbdf4196/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:02 compute-0 sudo[96114]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:02 compute-0 podman[96081]: 2025-11-29 05:10:02.811478506 +0000 UTC m=+0.113536443 container init bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:10:02 compute-0 podman[96081]: 2025-11-29 05:10:02.71832762 +0000 UTC m=+0.020385537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:02 compute-0 podman[96081]: 2025-11-29 05:10:02.822821492 +0000 UTC m=+0.124879389 container start bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:10:02 compute-0 podman[96081]: 2025-11-29 05:10:02.826631134 +0000 UTC m=+0.128689051 container attach bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:02 compute-0 sudo[96144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:02 compute-0 sudo[96144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:02 compute-0 sudo[96144]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:02 compute-0 sudo[96170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:10:02 compute-0 sudo[96170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:03 compute-0 podman[96255]: 2025-11-29 05:10:03.205076939 +0000 UTC m=+0.037826041 container create 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:10:03 compute-0 systemd[1]: Started libpod-conmon-4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60.scope.
Nov 29 05:10:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:03 compute-0 podman[96255]: 2025-11-29 05:10:03.282298727 +0000 UTC m=+0.115047939 container init 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:10:03 compute-0 podman[96255]: 2025-11-29 05:10:03.18745381 +0000 UTC m=+0.020202982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:03 compute-0 podman[96255]: 2025-11-29 05:10:03.287311478 +0000 UTC m=+0.120060580 container start 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 05:10:03 compute-0 podman[96255]: 2025-11-29 05:10:03.291609874 +0000 UTC m=+0.124359006 container attach 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 05:10:03 compute-0 pedantic_knuth[96272]: 167 167
Nov 29 05:10:03 compute-0 systemd[1]: libpod-4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60.scope: Deactivated successfully.
Nov 29 05:10:03 compute-0 podman[96255]: 2025-11-29 05:10:03.294218686 +0000 UTC m=+0.126967788 container died 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:10:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6b4bff7481e90a9bf674ee8a10c1207e13494f3ef9be803d5d41690bf409584-merged.mount: Deactivated successfully.
Nov 29 05:10:03 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1633300083' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 05:10:03 compute-0 ceph-mon[75176]: osdmap e25: 3 total, 3 up, 3 in
Nov 29 05:10:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:10:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:10:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:10:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 29 05:10:03 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1314617468' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 05:10:03 compute-0 podman[96255]: 2025-11-29 05:10:03.348302952 +0000 UTC m=+0.181052044 container remove 4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_knuth, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:10:03 compute-0 systemd[1]: libpod-conmon-4017296ef446465cef38470d5b94018ceae89c19e213c961dd9c9aa0d698ff60.scope: Deactivated successfully.
Nov 29 05:10:03 compute-0 podman[96297]: 2025-11-29 05:10:03.531491728 +0000 UTC m=+0.042879455 container create eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:03 compute-0 systemd[1]: Started libpod-conmon-eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f.scope.
Nov 29 05:10:03 compute-0 podman[96297]: 2025-11-29 05:10:03.509146614 +0000 UTC m=+0.020534321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:03 compute-0 podman[96297]: 2025-11-29 05:10:03.624948971 +0000 UTC m=+0.136336698 container init eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 05:10:03 compute-0 podman[96297]: 2025-11-29 05:10:03.632078924 +0000 UTC m=+0.143466641 container start eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:03 compute-0 podman[96297]: 2025-11-29 05:10:03.635841475 +0000 UTC m=+0.147229202 container attach eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:10:04 compute-0 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 05:10:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 29 05:10:04 compute-0 ceph-mon[75176]: pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:04 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1314617468' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 05:10:04 compute-0 ceph-mon[75176]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 05:10:04 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1314617468' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 05:10:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 29 05:10:04 compute-0 loving_keller[96139]: enabled application 'rbd' on pool 'images'
Nov 29 05:10:04 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 29 05:10:04 compute-0 systemd[1]: libpod-bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1.scope: Deactivated successfully.
Nov 29 05:10:04 compute-0 podman[96081]: 2025-11-29 05:10:04.380794644 +0000 UTC m=+1.682852541 container died bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aefc9da802f903bdc738b8b62d6c966408cbbb5e84d36d0aa071330dbdf4196-merged.mount: Deactivated successfully.
Nov 29 05:10:04 compute-0 podman[96081]: 2025-11-29 05:10:04.421684429 +0000 UTC m=+1.723742326 container remove bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1 (image=quay.io/ceph/ceph:v18, name=loving_keller, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 05:10:04 compute-0 systemd[1]: libpod-conmon-bfb9a26079d4179478dd443e2eb7998c0d4041fb7ffbd07e9bcbd09749bf36b1.scope: Deactivated successfully.
Nov 29 05:10:04 compute-0 sudo[96061]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:04 compute-0 sudo[96376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axusdxesswanrlidrvpyywgjwcalbtgv ; /usr/bin/python3'
Nov 29 05:10:04 compute-0 sudo[96376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:04 compute-0 hopeful_lamarr[96315]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:10:04 compute-0 hopeful_lamarr[96315]: --> relative data size: 1.0
Nov 29 05:10:04 compute-0 hopeful_lamarr[96315]: --> All data devices are unavailable
Nov 29 05:10:04 compute-0 systemd[1]: libpod-eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f.scope: Deactivated successfully.
Nov 29 05:10:04 compute-0 podman[96297]: 2025-11-29 05:10:04.715373022 +0000 UTC m=+1.226760819 container died eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:04 compute-0 python3[96378]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-11ae050190cbf6d095e9db67c16da14057fef6e8b0a583f02228eab77c311871-merged.mount: Deactivated successfully.
Nov 29 05:10:04 compute-0 podman[96297]: 2025-11-29 05:10:04.779130223 +0000 UTC m=+1.290517920 container remove eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:04 compute-0 systemd[1]: libpod-conmon-eef913656a04a773bbff8bf6699e003181e4ae6d6b9cb3076c5c27740aeab40f.scope: Deactivated successfully.
Nov 29 05:10:04 compute-0 podman[96395]: 2025-11-29 05:10:04.806697844 +0000 UTC m=+0.044338140 container create 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:10:04 compute-0 sudo[96170]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:04 compute-0 systemd[1]: Started libpod-conmon-111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0.scope.
Nov 29 05:10:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:04 compute-0 sudo[96407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:04 compute-0 sudo[96407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe45f81ee406cab442dfb89f68473e467e97349a517d0a9efa2b8fb03dcbd8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fe45f81ee406cab442dfb89f68473e467e97349a517d0a9efa2b8fb03dcbd8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:04 compute-0 sudo[96407]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:04 compute-0 podman[96395]: 2025-11-29 05:10:04.789015733 +0000 UTC m=+0.026656019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:04 compute-0 podman[96395]: 2025-11-29 05:10:04.89992015 +0000 UTC m=+0.137560516 container init 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 05:10:04 compute-0 podman[96395]: 2025-11-29 05:10:04.910197071 +0000 UTC m=+0.147837357 container start 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:04 compute-0 podman[96395]: 2025-11-29 05:10:04.916038903 +0000 UTC m=+0.153679229 container attach 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:10:04 compute-0 sudo[96439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:04 compute-0 sudo[96439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:04 compute-0 sudo[96439]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:05 compute-0 sudo[96465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:05 compute-0 sudo[96465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:05 compute-0 sudo[96465]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:05 compute-0 sudo[96490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:10:05 compute-0 sudo[96490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:05 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1314617468' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 05:10:05 compute-0 ceph-mon[75176]: osdmap e26: 3 total, 3 up, 3 in
Nov 29 05:10:05 compute-0 podman[96573]: 2025-11-29 05:10:05.416678119 +0000 UTC m=+0.064115230 container create 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:10:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 29 05:10:05 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1049034047' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 05:10:05 compute-0 systemd[1]: Started libpod-conmon-91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848.scope.
Nov 29 05:10:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:05 compute-0 podman[96573]: 2025-11-29 05:10:05.386037044 +0000 UTC m=+0.033474205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:05 compute-0 podman[96573]: 2025-11-29 05:10:05.490615497 +0000 UTC m=+0.138052598 container init 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:05 compute-0 podman[96573]: 2025-11-29 05:10:05.495983718 +0000 UTC m=+0.143420799 container start 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:05 compute-0 podman[96573]: 2025-11-29 05:10:05.50017485 +0000 UTC m=+0.147611921 container attach 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:10:05 compute-0 zen_brahmagupta[96591]: 167 167
Nov 29 05:10:05 compute-0 systemd[1]: libpod-91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848.scope: Deactivated successfully.
Nov 29 05:10:05 compute-0 podman[96573]: 2025-11-29 05:10:05.50262965 +0000 UTC m=+0.150066731 container died 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0ed6a263fc44cccb35260a11430343ccc1d452f3e64efd59511ee678340cd4b-merged.mount: Deactivated successfully.
Nov 29 05:10:05 compute-0 podman[96573]: 2025-11-29 05:10:05.541809003 +0000 UTC m=+0.189246084 container remove 91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:10:05 compute-0 systemd[1]: libpod-conmon-91a256ccd26ad511183d01f2dff5296b6711551842f9684bf6c8140ddcc3e848.scope: Deactivated successfully.
Nov 29 05:10:05 compute-0 podman[96614]: 2025-11-29 05:10:05.671491137 +0000 UTC m=+0.036888038 container create b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 05:10:05 compute-0 systemd[1]: Started libpod-conmon-b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede.scope.
Nov 29 05:10:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98516b76b9deeacec000ae6f616b1874eaea3d017d4c6c297a34e97fb3ff99fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98516b76b9deeacec000ae6f616b1874eaea3d017d4c6c297a34e97fb3ff99fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98516b76b9deeacec000ae6f616b1874eaea3d017d4c6c297a34e97fb3ff99fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98516b76b9deeacec000ae6f616b1874eaea3d017d4c6c297a34e97fb3ff99fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:05 compute-0 podman[96614]: 2025-11-29 05:10:05.654003152 +0000 UTC m=+0.019400073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:05 compute-0 podman[96614]: 2025-11-29 05:10:05.757577641 +0000 UTC m=+0.122974582 container init b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:10:05 compute-0 podman[96614]: 2025-11-29 05:10:05.767449441 +0000 UTC m=+0.132846332 container start b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:10:05 compute-0 podman[96614]: 2025-11-29 05:10:05.771002257 +0000 UTC m=+0.136399238 container attach b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:10:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 29 05:10:06 compute-0 ceph-mon[75176]: pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:06 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1049034047' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 05:10:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1049034047' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 05:10:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 29 05:10:06 compute-0 gallant_wright[96434]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 29 05:10:06 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 29 05:10:06 compute-0 systemd[1]: libpod-111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0.scope: Deactivated successfully.
Nov 29 05:10:06 compute-0 podman[96395]: 2025-11-29 05:10:06.415936783 +0000 UTC m=+1.653577059 container died 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fe45f81ee406cab442dfb89f68473e467e97349a517d0a9efa2b8fb03dcbd8e-merged.mount: Deactivated successfully.
Nov 29 05:10:06 compute-0 podman[96395]: 2025-11-29 05:10:06.468640966 +0000 UTC m=+1.706281242 container remove 111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0 (image=quay.io/ceph/ceph:v18, name=gallant_wright, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:10:06 compute-0 systemd[1]: libpod-conmon-111871a67f386475d8f7513c81e923bf3323777df8dd25f43d48ecd9e35a8ed0.scope: Deactivated successfully.
Nov 29 05:10:06 compute-0 sudo[96376]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:06 compute-0 brave_bardeen[96630]: {
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:     "0": [
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:         {
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "devices": [
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "/dev/loop3"
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             ],
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_name": "ceph_lv0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_size": "21470642176",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "name": "ceph_lv0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "tags": {
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.crush_device_class": "",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.encrypted": "0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.osd_id": "0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.type": "block",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.vdo": "0"
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             },
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "type": "block",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "vg_name": "ceph_vg0"
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:         }
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:     ],
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:     "1": [
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:         {
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "devices": [
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "/dev/loop4"
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             ],
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_name": "ceph_lv1",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_size": "21470642176",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "name": "ceph_lv1",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "tags": {
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.crush_device_class": "",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.encrypted": "0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.osd_id": "1",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.type": "block",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.vdo": "0"
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             },
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "type": "block",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "vg_name": "ceph_vg1"
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:         }
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:     ],
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:     "2": [
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:         {
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "devices": [
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "/dev/loop5"
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             ],
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_name": "ceph_lv2",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_size": "21470642176",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "name": "ceph_lv2",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "tags": {
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.crush_device_class": "",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.encrypted": "0",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.osd_id": "2",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.type": "block",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:                 "ceph.vdo": "0"
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             },
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "type": "block",
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:             "vg_name": "ceph_vg2"
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:         }
Nov 29 05:10:06 compute-0 brave_bardeen[96630]:     ]
Nov 29 05:10:06 compute-0 brave_bardeen[96630]: }
Nov 29 05:10:06 compute-0 systemd[1]: libpod-b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede.scope: Deactivated successfully.
Nov 29 05:10:06 compute-0 podman[96655]: 2025-11-29 05:10:06.62257991 +0000 UTC m=+0.026337122 container died b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:06 compute-0 sudo[96687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqytxfitjqiiyamzsrlzanaajdefkylv ; /usr/bin/python3'
Nov 29 05:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-98516b76b9deeacec000ae6f616b1874eaea3d017d4c6c297a34e97fb3ff99fe-merged.mount: Deactivated successfully.
Nov 29 05:10:06 compute-0 sudo[96687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:06 compute-0 podman[96655]: 2025-11-29 05:10:06.706484621 +0000 UTC m=+0.110241803 container remove b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:10:06 compute-0 systemd[1]: libpod-conmon-b05727b154e97d5a20fe12361617e85ef9928dd4f27651c952413789b7517ede.scope: Deactivated successfully.
Nov 29 05:10:06 compute-0 sudo[96490]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:06 compute-0 sudo[96693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:06 compute-0 sudo[96693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:06 compute-0 python3[96692]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:06 compute-0 sudo[96693]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:06 compute-0 sudo[96719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:06 compute-0 sudo[96719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:06 compute-0 podman[96718]: 2025-11-29 05:10:06.878818522 +0000 UTC m=+0.048809188 container create 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:10:06 compute-0 sudo[96719]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:06 compute-0 systemd[1]: Started libpod-conmon-27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086.scope.
Nov 29 05:10:06 compute-0 podman[96718]: 2025-11-29 05:10:06.850006011 +0000 UTC m=+0.019996647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:06 compute-0 sudo[96756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:06 compute-0 sudo[96756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f7cefc58315380a9f7a0e002759f3e7cbabf1d77e4e2ad770363f535fc6628/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41f7cefc58315380a9f7a0e002759f3e7cbabf1d77e4e2ad770363f535fc6628/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:06 compute-0 sudo[96756]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:06 compute-0 podman[96718]: 2025-11-29 05:10:06.973040023 +0000 UTC m=+0.143030759 container init 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:06 compute-0 podman[96718]: 2025-11-29 05:10:06.983388075 +0000 UTC m=+0.153378701 container start 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:06 compute-0 podman[96718]: 2025-11-29 05:10:06.98729996 +0000 UTC m=+0.157290616 container attach 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:10:07 compute-0 sudo[96787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:10:07 compute-0 sudo[96787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:07 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1049034047' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 05:10:07 compute-0 ceph-mon[75176]: osdmap e27: 3 total, 3 up, 3 in
Nov 29 05:10:07 compute-0 podman[96872]: 2025-11-29 05:10:07.462517178 +0000 UTC m=+0.057716695 container create 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:07 compute-0 systemd[1]: Started libpod-conmon-9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2.scope.
Nov 29 05:10:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 29 05:10:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/142457142' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 05:10:07 compute-0 podman[96872]: 2025-11-29 05:10:07.433664337 +0000 UTC m=+0.028863854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:07 compute-0 podman[96872]: 2025-11-29 05:10:07.546244434 +0000 UTC m=+0.141444041 container init 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:10:07 compute-0 podman[96872]: 2025-11-29 05:10:07.55670805 +0000 UTC m=+0.151907607 container start 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:07 compute-0 sad_greider[96889]: 167 167
Nov 29 05:10:07 compute-0 systemd[1]: libpod-9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2.scope: Deactivated successfully.
Nov 29 05:10:07 compute-0 podman[96872]: 2025-11-29 05:10:07.561883435 +0000 UTC m=+0.157083042 container attach 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:10:07 compute-0 podman[96872]: 2025-11-29 05:10:07.563238688 +0000 UTC m=+0.158438245 container died 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6d90e32844807701a095ef9f73b57c568453fe7533a44d35d06de6527c84390-merged.mount: Deactivated successfully.
Nov 29 05:10:07 compute-0 podman[96872]: 2025-11-29 05:10:07.611123913 +0000 UTC m=+0.206323460 container remove 9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_greider, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:07 compute-0 systemd[1]: libpod-conmon-9e1743b585746c27544d369952f2a88cc8d822e16f029d62a452f0a68760b2a2.scope: Deactivated successfully.
Nov 29 05:10:07 compute-0 podman[96913]: 2025-11-29 05:10:07.837402366 +0000 UTC m=+0.061556098 container create 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:07 compute-0 systemd[1]: Started libpod-conmon-7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15.scope.
Nov 29 05:10:07 compute-0 podman[96913]: 2025-11-29 05:10:07.805294875 +0000 UTC m=+0.029448657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae0a4ac8bb4e807d800698d33951bd419a8108077b9d5d23c8055889e05fe8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae0a4ac8bb4e807d800698d33951bd419a8108077b9d5d23c8055889e05fe8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae0a4ac8bb4e807d800698d33951bd419a8108077b9d5d23c8055889e05fe8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ae0a4ac8bb4e807d800698d33951bd419a8108077b9d5d23c8055889e05fe8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:07 compute-0 podman[96913]: 2025-11-29 05:10:07.934299883 +0000 UTC m=+0.158453625 container init 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 05:10:07 compute-0 podman[96913]: 2025-11-29 05:10:07.946345166 +0000 UTC m=+0.170498868 container start 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 05:10:07 compute-0 podman[96913]: 2025-11-29 05:10:07.949539813 +0000 UTC m=+0.173693555 container attach 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 05:10:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 29 05:10:08 compute-0 ceph-mon[75176]: pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:08 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/142457142' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 05:10:08 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/142457142' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 05:10:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 29 05:10:08 compute-0 awesome_kirch[96782]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 29 05:10:08 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 29 05:10:08 compute-0 systemd[1]: libpod-27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086.scope: Deactivated successfully.
Nov 29 05:10:08 compute-0 podman[96718]: 2025-11-29 05:10:08.450669012 +0000 UTC m=+1.620659668 container died 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-41f7cefc58315380a9f7a0e002759f3e7cbabf1d77e4e2ad770363f535fc6628-merged.mount: Deactivated successfully.
Nov 29 05:10:08 compute-0 podman[96718]: 2025-11-29 05:10:08.514917625 +0000 UTC m=+1.684908291 container remove 27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086 (image=quay.io/ceph/ceph:v18, name=awesome_kirch, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:08 compute-0 systemd[1]: libpod-conmon-27b5444c72b3e76cecafcae1e3898426dad031b23f6aa6e57e31029867eba086.scope: Deactivated successfully.
Nov 29 05:10:08 compute-0 sudo[96687]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]: {
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "osd_id": 0,
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "type": "bluestore"
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:     },
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "osd_id": 1,
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "type": "bluestore"
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:     },
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "osd_id": 2,
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:         "type": "bluestore"
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]:     }
Nov 29 05:10:08 compute-0 epic_brahmagupta[96929]: }
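[annotation] The JSON block printed by epic_brahmagupta is the result of the `ceph-volume --fsid ... raw list --format json` call issued through cephadm at 05:10:07. A minimal sketch of how a caller might consume it, using one entry copied from the output above (the real capture has three OSDs):

    # Sketch: map OSD ids to backing devices from `ceph-volume raw list --format json`.
    import json

    captured_stdout = """{
        "3cc3f442-c807-4e2a-868e-a4aae87af231": {
            "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
            "type": "bluestore"
        }
    }"""

    for osd_uuid, osd in json.loads(captured_stdout).items():
        # Top-level keys are OSD uuids; each value describes one bluestore OSD.
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}")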
Nov 29 05:10:09 compute-0 systemd[1]: libpod-7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15.scope: Deactivated successfully.
Nov 29 05:10:09 compute-0 podman[96913]: 2025-11-29 05:10:09.032136895 +0000 UTC m=+1.256290647 container died 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:09 compute-0 systemd[1]: libpod-7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15.scope: Consumed 1.089s CPU time.
Nov 29 05:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4ae0a4ac8bb4e807d800698d33951bd419a8108077b9d5d23c8055889e05fe8-merged.mount: Deactivated successfully.
Nov 29 05:10:09 compute-0 podman[96913]: 2025-11-29 05:10:09.10386829 +0000 UTC m=+1.328022022 container remove 7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:09 compute-0 systemd[1]: libpod-conmon-7471aeb4ee682e2a13f834af56b61ad5d53ed15a74223f84ee883829fdca6a15.scope: Deactivated successfully.
Nov 29 05:10:09 compute-0 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 05:10:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:09 compute-0 sudo[96787]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:09 compute-0 sudo[96988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:09 compute-0 sudo[96988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:09 compute-0 sudo[96988]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:09 compute-0 sudo[97028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:10:09 compute-0 sudo[97028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:09 compute-0 sudo[97028]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:09 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/142457142' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 05:10:09 compute-0 ceph-mon[75176]: osdmap e28: 3 total, 3 up, 3 in
Nov 29 05:10:09 compute-0 ceph-mon[75176]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 05:10:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:09 compute-0 python3[97113]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:10:10 compute-0 python3[97184]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393009.3111906-36560-92396070657346/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:10:10 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 05:10:10 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 05:10:10 compute-0 ceph-mon[75176]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:10 compute-0 ceph-mon[75176]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 05:10:10 compute-0 ceph-mon[75176]: Cluster is now healthy
Nov 29 05:10:10 compute-0 sudo[97284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btohnvyyrzupwwtuqpfyjzxqeuuoxksi ; /usr/bin/python3'
Nov 29 05:10:10 compute-0 sudo[97284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:10 compute-0 python3[97286]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:10:10 compute-0 sudo[97284]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:11 compute-0 sudo[97359]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuelwqolibmbxeopdxiybbbbogyfjfyo ; /usr/bin/python3'
Nov 29 05:10:11 compute-0 sudo[97359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:11 compute-0 python3[97361]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393010.5569205-36574-1604101689895/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=4f2b0ec0c0a878c4af2a9002dc161de66516d501 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:10:11 compute-0 sudo[97359]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:10:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:10:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:10:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:10:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:10:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:10:11 compute-0 sudo[97409]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eikyrxgpyhsxiomxiahswktlutcqxfad ; /usr/bin/python3'
Nov 29 05:10:11 compute-0 sudo[97409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:11 compute-0 python3[97411]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:11 compute-0 podman[97412]: 2025-11-29 05:10:11.754102259 +0000 UTC m=+0.043410397 container create ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:11 compute-0 systemd[1]: Started libpod-conmon-ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a.scope.
Nov 29 05:10:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:11 compute-0 podman[97412]: 2025-11-29 05:10:11.732192616 +0000 UTC m=+0.021500784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4fecc07d19cc07e6824eb36807287c36b1d109f28b9b5fe7e456355f645b9d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4fecc07d19cc07e6824eb36807287c36b1d109f28b9b5fe7e456355f645b9d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c4fecc07d19cc07e6824eb36807287c36b1d109f28b9b5fe7e456355f645b9d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:11 compute-0 podman[97412]: 2025-11-29 05:10:11.849874068 +0000 UTC m=+0.139182276 container init ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:11 compute-0 podman[97412]: 2025-11-29 05:10:11.861188443 +0000 UTC m=+0.150496571 container start ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:11 compute-0 podman[97412]: 2025-11-29 05:10:11.865317484 +0000 UTC m=+0.154625702 container attach ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 05:10:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 05:10:12 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1127288031' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 05:10:12 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1127288031' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 05:10:12 compute-0 exciting_allen[97428]: 
Nov 29 05:10:12 compute-0 exciting_allen[97428]: [global]
Nov 29 05:10:12 compute-0 exciting_allen[97428]:         fsid = 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:10:12 compute-0 exciting_allen[97428]:         mon_host = 192.168.122.100
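[annotation] The exciting_allen output above is what `ceph config assimilate-conf -i <file>` hands back: the minimal leftover config it did not absorb into the mon config database (options such as fsid and mon_host stay in the local ceph.conf). A sketch of the round trip, assuming the `ceph` CLI is reachable with admin credentials rather than wrapped in podman as in the log:

    # Sketch: assimilate a local ini-style config and keep only the leftover.
    import subprocess

    leftover = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/home/assimilate_ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    # `leftover` corresponds to the [global] block printed above; it is what
    # should remain in the on-disk ceph.conf after assimilation.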
Nov 29 05:10:12 compute-0 systemd[1]: libpod-ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a.scope: Deactivated successfully.
Nov 29 05:10:12 compute-0 podman[97412]: 2025-11-29 05:10:12.432289374 +0000 UTC m=+0.721597522 container died ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:12 compute-0 ceph-mon[75176]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:12 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1127288031' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 05:10:12 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1127288031' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 05:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c4fecc07d19cc07e6824eb36807287c36b1d109f28b9b5fe7e456355f645b9d-merged.mount: Deactivated successfully.
Nov 29 05:10:12 compute-0 podman[97412]: 2025-11-29 05:10:12.478843856 +0000 UTC m=+0.768151994 container remove ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a (image=quay.io/ceph/ceph:v18, name=exciting_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:12 compute-0 systemd[1]: libpod-conmon-ef611539cd1b0d18b8c941cd4168d74ee22031960bcd4699f600af9680ae037a.scope: Deactivated successfully.
Nov 29 05:10:12 compute-0 sudo[97453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:12 compute-0 sudo[97453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:12 compute-0 sudo[97409]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:12 compute-0 sudo[97453]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:12 compute-0 sudo[97490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:12 compute-0 sudo[97490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:12 compute-0 sudo[97490]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:12 compute-0 sudo[97515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:12 compute-0 sudo[97515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:12 compute-0 sudo[97515]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:12 compute-0 sudo[97562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awfcwfpdbcchzvemhmdoxdmxrzuvvtmc ; /usr/bin/python3'
Nov 29 05:10:12 compute-0 sudo[97562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:12 compute-0 sudo[97564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:10:12 compute-0 sudo[97564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:12 compute-0 python3[97568]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None

Nov 29 05:10:12 compute-0 podman[97591]: 2025-11-29 05:10:12.837511919 +0000 UTC m=+0.067940273 container create 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:10:12 compute-0 systemd[1]: Started libpod-conmon-8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145.scope.
Nov 29 05:10:12 compute-0 podman[97591]: 2025-11-29 05:10:12.810048922 +0000 UTC m=+0.040477376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0052d47c007df8996bbbd12b2d542ab7fe61192973fb3366b2d75728b009b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0052d47c007df8996bbbd12b2d542ab7fe61192973fb3366b2d75728b009b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b0052d47c007df8996bbbd12b2d542ab7fe61192973fb3366b2d75728b009b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:12 compute-0 podman[97591]: 2025-11-29 05:10:12.928163574 +0000 UTC m=+0.158591948 container init 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:12 compute-0 podman[97591]: 2025-11-29 05:10:12.934317235 +0000 UTC m=+0.164745599 container start 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:12 compute-0 podman[97591]: 2025-11-29 05:10:12.937485281 +0000 UTC m=+0.167913635 container attach 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:13 compute-0 podman[97682]: 2025-11-29 05:10:13.174209569 +0000 UTC m=+0.046375689 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:13 compute-0 podman[97682]: 2025-11-29 05:10:13.282645066 +0000 UTC m=+0.154811166 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:10:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 29 05:10:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3496001098' entity='client.admin' 
Nov 29 05:10:13 compute-0 intelligent_volhard[97632]: set ssl_option
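[annotation] intelligent_volhard's "set ssl_option" confirms the `config-key set` issued by the ansible task at 05:10:12. config-key is a flat key/value store, so a read-back is enough to verify the write; a minimal sketch, again assuming direct CLI access with admin credentials:

    # Sketch: verify the config-key written above.
    import subprocess

    value = subprocess.run(
        ["ceph", "config-key", "get", "ssl_option"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    assert value == "no_sslv2:sslv3:no_tlsv1:no_tlsv1_1"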
Nov 29 05:10:13 compute-0 systemd[1]: libpod-8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145.scope: Deactivated successfully.
Nov 29 05:10:13 compute-0 podman[97591]: 2025-11-29 05:10:13.553172206 +0000 UTC m=+0.783600560 container died 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8b0052d47c007df8996bbbd12b2d542ab7fe61192973fb3366b2d75728b009b-merged.mount: Deactivated successfully.
Nov 29 05:10:13 compute-0 podman[97591]: 2025-11-29 05:10:13.597591206 +0000 UTC m=+0.828019580 container remove 8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145 (image=quay.io/ceph/ceph:v18, name=intelligent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:10:13 compute-0 systemd[1]: libpod-conmon-8b967d165a86ed51363af76db276d483e38551d2998c366ecf68486dfd2df145.scope: Deactivated successfully.
Nov 29 05:10:13 compute-0 sudo[97562]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:13 compute-0 sudo[97564]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:13 compute-0 sudo[97857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhacgaythujtwkjggshxjmskgbbbrbaj ; /usr/bin/python3'
Nov 29 05:10:13 compute-0 sudo[97857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:10:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:10:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:10:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:13 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 975121d3-e9ce-4516-b873-5b48dcdd0d7d does not exist
Nov 29 05:10:13 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev dfa1c105-c45b-43d3-b665-77b93fbcac6e does not exist
Nov 29 05:10:13 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev fac06290-7813-42f2-88bc-1cc6f4faef07 does not exist
Nov 29 05:10:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:10:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:10:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:10:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:10:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:13 compute-0 sudo[97860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:13 compute-0 sudo[97860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:13 compute-0 sudo[97860]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:13 compute-0 python3[97859]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
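The Ansible task above never touches a host-installed ceph binary; it wraps the CLI in a throwaway podman container. A sketch of the same invocation with subprocess, with the image, fsid, and volume mounts copied from the log line (ceph_in_podman is a hypothetical helper, not part of the playbook):

import subprocess

FSID = "93f82912-647c-5e78-b081-707d0a2966d8"
IMAGE = "quay.io/ceph/ceph:v18"

def ceph_in_podman(*ceph_args, spec_host_path="/tmp/ceph_rgw.yml"):
    # Mirrors the 'podman run' arguments logged above.
    argv = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--volume", f"{spec_host_path}:/home/ceph_spec.yaml:z",
        "--entrypoint", "ceph", IMAGE,
        "--fsid", FSID, "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        *ceph_args,
    ]
    return subprocess.run(argv, check=True, capture_output=True, text=True)

# The call made here:
# ceph_in_podman("orch", "apply", "--in-file", "/home/ceph_spec.yaml")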
Nov 29 05:10:13 compute-0 sudo[97885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:13 compute-0 sudo[97885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:13 compute-0 sudo[97885]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:13 compute-0 podman[97908]: 2025-11-29 05:10:13.951716099 +0000 UTC m=+0.042865343 container create 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:13 compute-0 sudo[97916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:13 compute-0 sudo[97916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:13 compute-0 sudo[97916]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:13 compute-0 systemd[1]: Started libpod-conmon-0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b.scope.
Nov 29 05:10:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:14 compute-0 sudo[97948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:10:14 compute-0 sudo[97948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
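The sudo line above is the cephadm shim driving ceph-volume "lvm batch" over three pre-created logical volumes, with OSD-spec affinity passed through the environment. A sketch of the bare ceph-volume call it wraps, with LV paths, flags, and the env var taken from the log (the --report flag is an added safety assumption that previews instead of creating OSDs; it is not in the logged call):

import os
import subprocess

def ceph_volume_batch(lv_paths, report_only=True):
    # Bare-metal version of the batch call; drop --report to match the log.
    argv = ["ceph-volume", "lvm", "batch", "--no-auto", *lv_paths,
            "--yes", "--no-systemd"]
    if report_only:
        argv.append("--report")
    env = dict(os.environ, CEPH_VOLUME_OSDSPEC_AFFINITY="default_drive_group")
    return subprocess.run(argv, env=env, capture_output=True, text=True)

print(ceph_volume_batch(["/dev/ceph_vg0/ceph_lv0",
                         "/dev/ceph_vg1/ceph_lv1",
                         "/dev/ceph_vg2/ceph_lv2"]).stdout)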
Nov 29 05:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afa1ae8130f9c3046176b9c6a27f013971bda179209a7edc0caa59b84f21739/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afa1ae8130f9c3046176b9c6a27f013971bda179209a7edc0caa59b84f21739/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8afa1ae8130f9c3046176b9c6a27f013971bda179209a7edc0caa59b84f21739/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:14 compute-0 podman[97908]: 2025-11-29 05:10:13.934326587 +0000 UTC m=+0.025475851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:14 compute-0 podman[97908]: 2025-11-29 05:10:14.040737875 +0000 UTC m=+0.131887169 container init 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:10:14 compute-0 podman[97908]: 2025-11-29 05:10:14.048739789 +0000 UTC m=+0.139889033 container start 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:14 compute-0 podman[97908]: 2025-11-29 05:10:14.052324466 +0000 UTC m=+0.143473720 container attach 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:14 compute-0 podman[98036]: 2025-11-29 05:10:14.465963686 +0000 UTC m=+0.059326293 container create 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:14 compute-0 systemd[1]: Started libpod-conmon-118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c.scope.
Nov 29 05:10:14 compute-0 podman[98036]: 2025-11-29 05:10:14.435063095 +0000 UTC m=+0.028425732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:14 compute-0 ceph-mon[75176]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3496001098' entity='client.admin' 
Nov 29 05:10:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:10:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:10:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:10:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:14 compute-0 podman[98036]: 2025-11-29 05:10:14.553414444 +0000 UTC m=+0.146777081 container init 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:14 compute-0 podman[98036]: 2025-11-29 05:10:14.565860707 +0000 UTC m=+0.159223284 container start 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:14 compute-0 epic_mestorf[98053]: 167 167
Nov 29 05:10:14 compute-0 systemd[1]: libpod-118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c.scope: Deactivated successfully.
Nov 29 05:10:14 compute-0 podman[98036]: 2025-11-29 05:10:14.571030702 +0000 UTC m=+0.164393299 container attach 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:14 compute-0 podman[98036]: 2025-11-29 05:10:14.571622777 +0000 UTC m=+0.164985344 container died 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-36ebc5c31970d9ac6bb3a815bf02e2cecdb48a400fa8a812e298dc428491e5bb-merged.mount: Deactivated successfully.
Nov 29 05:10:14 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 29 05:10:14 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 29 05:10:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 05:10:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:14 compute-0 vigilant_goldstine[97968]: Scheduled rgw.rgw update...
Nov 29 05:10:14 compute-0 podman[98036]: 2025-11-29 05:10:14.608115104 +0000 UTC m=+0.201477681 container remove 118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:10:14 compute-0 systemd[1]: libpod-0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b.scope: Deactivated successfully.
Nov 29 05:10:14 compute-0 podman[97908]: 2025-11-29 05:10:14.623420987 +0000 UTC m=+0.714570231 container died 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:14 compute-0 systemd[1]: libpod-conmon-118c725801fd8800e9a15f7ff51a6fa52075111a280a77d96a015da14855c47c.scope: Deactivated successfully.
Nov 29 05:10:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-8afa1ae8130f9c3046176b9c6a27f013971bda179209a7edc0caa59b84f21739-merged.mount: Deactivated successfully.
Nov 29 05:10:14 compute-0 podman[97908]: 2025-11-29 05:10:14.661518183 +0000 UTC m=+0.752667427 container remove 0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b (image=quay.io/ceph/ceph:v18, name=vigilant_goldstine, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:10:14 compute-0 systemd[1]: libpod-conmon-0ba9ae02f3c18d847e46c27a27fa40a31b7209a09b2b31cbcd76087bace2378b.scope: Deactivated successfully.
Nov 29 05:10:14 compute-0 sudo[97857]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:14 compute-0 podman[98092]: 2025-11-29 05:10:14.747556256 +0000 UTC m=+0.033954337 container create 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:14 compute-0 systemd[1]: Started libpod-conmon-8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e.scope.
Nov 29 05:10:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:14 compute-0 podman[98092]: 2025-11-29 05:10:14.815371165 +0000 UTC m=+0.101769246 container init 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:10:14 compute-0 podman[98092]: 2025-11-29 05:10:14.824485196 +0000 UTC m=+0.110883277 container start 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:14 compute-0 podman[98092]: 2025-11-29 05:10:14.827996012 +0000 UTC m=+0.114394123 container attach 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:14 compute-0 podman[98092]: 2025-11-29 05:10:14.732973751 +0000 UTC m=+0.019371842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:15 compute-0 ceph-mon[75176]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:10:15 compute-0 ceph-mon[75176]: Saving service rgw.rgw spec with placement compute-0
Nov 29 05:10:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:15 compute-0 python3[98196]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:10:15 compute-0 vibrant_wiles[98108]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:10:15 compute-0 vibrant_wiles[98108]: --> relative data size: 1.0
Nov 29 05:10:15 compute-0 vibrant_wiles[98108]: --> All data devices are unavailable
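"All data devices are unavailable" is ceph-volume declining to re-consume the three LVs: they already carry ceph.* LVM tags from the earlier prepare (compare the lvm list JSON near the end of this excerpt), so the batch creates nothing. A sketch of spotting already-consumed LVs with stock lvm2 tooling (an assumption about approach; this is not what ceph-volume itself runs internally):

import json
import subprocess

def tagged_ceph_lvs():
    # lvm2 JSON report: {"report": [{"lv": [{"lv_path": ..., "lv_tags": ...}]}]}
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_path,lv_tags"],
        capture_output=True, text=True, check=True).stdout
    rows = json.loads(out)["report"][0]["lv"]
    return [r["lv_path"] for r in rows if "ceph.osd_id=" in r["lv_tags"]]

print(tagged_ceph_lvs())  # expect the three /dev/ceph_vg*/ceph_lv* paths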
Nov 29 05:10:15 compute-0 systemd[1]: libpod-8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e.scope: Deactivated successfully.
Nov 29 05:10:15 compute-0 podman[98092]: 2025-11-29 05:10:15.857163314 +0000 UTC m=+1.143561405 container died 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:10:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-93d87d063709b4ce32326f308acfbb7f3c99535ff0b56681ba4c98246e733508-merged.mount: Deactivated successfully.
Nov 29 05:10:15 compute-0 podman[98092]: 2025-11-29 05:10:15.912959801 +0000 UTC m=+1.199357902 container remove 8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wiles, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:15 compute-0 systemd[1]: libpod-conmon-8d7fa33b7d38b3652abfeb2bc673a4d5445f7e5fe3086ef59f88fc5bfb99674e.scope: Deactivated successfully.
Nov 29 05:10:15 compute-0 sudo[97948]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:15 compute-0 python3[98283]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393015.3872905-36615-271709434322707/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:10:15 compute-0 sudo[98298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:16 compute-0 sudo[98298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:16 compute-0 sudo[98298]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:16 compute-0 sudo[98323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:16 compute-0 sudo[98323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:16 compute-0 sudo[98323]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:16 compute-0 sudo[98371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:16 compute-0 sudo[98371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:16 compute-0 sudo[98371]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:16 compute-0 sudo[98397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:10:16 compute-0 sudo[98397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
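This "lvm list --format json" call is what produces the JSON printed by the boring_rhodes container further down in this excerpt. A sketch of digesting that payload into osd_id to LV details, using only the fields visible in this log (keys beyond those shown are not assumed):

import json

def osds_from_lvm_list(payload: str):
    # Payload shape per the log: {"0": [{"lv_path": ..., "tags": {...}}, ...], ...}
    osds = {}
    for osd_id, entries in json.loads(payload).items():
        for entry in entries:
            tags = entry.get("tags", {})
            osds[int(osd_id)] = {
                "lv_path": entry["lv_path"],
                "osd_fsid": tags.get("ceph.osd_fsid"),
                "cluster_fsid": tags.get("ceph.cluster_fsid"),
            }
    return osds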
Nov 29 05:10:16 compute-0 sudo[98463]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wphvbltwcqaosfzzedhmamjojupgtiwb ; /usr/bin/python3'
Nov 29 05:10:16 compute-0 sudo[98463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:16 compute-0 python3[98470]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:16 compute-0 podman[98485]: 2025-11-29 05:10:16.442700085 +0000 UTC m=+0.029650612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:16 compute-0 podman[98485]: 2025-11-29 05:10:16.692992764 +0000 UTC m=+0.279943241 container create 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:16 compute-0 ceph-mon[75176]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:16 compute-0 podman[98500]: 2025-11-29 05:10:16.727656437 +0000 UTC m=+0.266153475 container create aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:16 compute-0 systemd[1]: Started libpod-conmon-3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e.scope.
Nov 29 05:10:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:16 compute-0 systemd[1]: Started libpod-conmon-aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf.scope.
Nov 29 05:10:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:16 compute-0 podman[98485]: 2025-11-29 05:10:16.779927958 +0000 UTC m=+0.366878475 container init 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9983812993cb03b190dde1b038e1cdc229816227c70a6e87ce0edb3e17334f0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9983812993cb03b190dde1b038e1cdc229816227c70a6e87ce0edb3e17334f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9983812993cb03b190dde1b038e1cdc229816227c70a6e87ce0edb3e17334f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:16 compute-0 podman[98485]: 2025-11-29 05:10:16.796167253 +0000 UTC m=+0.383117690 container start 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:16 compute-0 podman[98500]: 2025-11-29 05:10:16.797995347 +0000 UTC m=+0.336492405 container init aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 05:10:16 compute-0 podman[98485]: 2025-11-29 05:10:16.803178463 +0000 UTC m=+0.390128930 container attach 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 05:10:16 compute-0 festive_hawking[98515]: 167 167
Nov 29 05:10:16 compute-0 systemd[1]: libpod-3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e.scope: Deactivated successfully.
Nov 29 05:10:16 compute-0 podman[98500]: 2025-11-29 05:10:16.71136713 +0000 UTC m=+0.249864198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:16 compute-0 podman[98485]: 2025-11-29 05:10:16.807700773 +0000 UTC m=+0.394651210 container died 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:10:16 compute-0 podman[98500]: 2025-11-29 05:10:16.808955124 +0000 UTC m=+0.347452162 container start aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:10:16 compute-0 podman[98500]: 2025-11-29 05:10:16.820684439 +0000 UTC m=+0.359181487 container attach aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-66e62aac7a47d5b6e6f2b715249ab0060d6cab42832b261abd99f6295e77ca0c-merged.mount: Deactivated successfully.
Nov 29 05:10:16 compute-0 podman[98485]: 2025-11-29 05:10:16.848037634 +0000 UTC m=+0.434988071 container remove 3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hawking, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:10:16 compute-0 systemd[1]: libpod-conmon-3373fc5d917b8e544ff4dc1ffb93b750fdb3b0cd1770751a465fcbc75b23e15e.scope: Deactivated successfully.
Nov 29 05:10:17 compute-0 podman[98543]: 2025-11-29 05:10:17.029955039 +0000 UTC m=+0.054565448 container create f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:10:17 compute-0 systemd[1]: Started libpod-conmon-f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91.scope.
Nov 29 05:10:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc2a92b462beca3a24041e3a490651290f9c37772d4310e31c5a76927394524/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:17 compute-0 podman[98543]: 2025-11-29 05:10:17.007953654 +0000 UTC m=+0.032564103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc2a92b462beca3a24041e3a490651290f9c37772d4310e31c5a76927394524/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc2a92b462beca3a24041e3a490651290f9c37772d4310e31c5a76927394524/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc2a92b462beca3a24041e3a490651290f9c37772d4310e31c5a76927394524/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:17 compute-0 podman[98543]: 2025-11-29 05:10:17.117945009 +0000 UTC m=+0.142555468 container init f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 05:10:17 compute-0 podman[98543]: 2025-11-29 05:10:17.12582737 +0000 UTC m=+0.150437769 container start f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 05:10:17 compute-0 podman[98543]: 2025-11-29 05:10:17.129574732 +0000 UTC m=+0.154185211 container attach f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:10:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:10:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 05:10:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 29 05:10:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 05:10:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 29 05:10:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 05:10:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 29 05:10:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
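The mgr volumes module expands the single "fs volume create cephfs" request into exactly the three mon commands dispatched above: metadata pool, data pool, then "fs new". A hand-rolled equivalent, as a sketch (the --bulk flag mirrors the "bulk": true field in the dispatched command; Reef-era CLI assumed):

import subprocess

for argv in (
    ["ceph", "osd", "pool", "create", "cephfs.cephfs.meta"],
    ["ceph", "osd", "pool", "create", "cephfs.cephfs.data", "--bulk"],
    ["ceph", "fs", "new", "cephfs", "cephfs.cephfs.meta", "cephfs.cephfs.data"],
):
    subprocess.run(argv, check=True)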
Nov 29 05:10:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 29 05:10:17 compute-0 ceph-mon[75176]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 05:10:17 compute-0 ceph-mon[75176]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 05:10:17 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0[75172]: 2025-11-29T05:10:17.380+0000 7fad21b30640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 05:10:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
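MDS_ALL_DOWN immediately after "fs new" is expected here: the mds.cephfs service spec is only saved a few lines below, so no MDS daemon can have joined the new filesystem yet. A sketch for waiting out the transient, assuming admin CLI access (the 120 s budget and 5 s poll interval are arbitrary choices):

import json
import subprocess
import time

def wait_mds_up(timeout=120, poll=5):
    # 'ceph health -f json' reports active checks under the "checks" key.
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run(["ceph", "health", "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        if "MDS_ALL_DOWN" not in json.loads(out).get("checks", {}):
            return True
        time.sleep(poll)
    return False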
Nov 29 05:10:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e2 new map
Nov 29 05:10:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T05:10:17.381210+0000
                                           modified        2025-11-29T05:10:17.381255+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 29 05:10:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 29 05:10:17 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 29 05:10:17 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : fsmap cephfs:0
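The printed fsmap above (epoch 2, max_mds 1, empty "up" set) is the same state the two health checks complain about, and it is readable as JSON without scraping the mon log. A sketch, assuming the usual "fs dump" JSON layout with a "filesystems" list of "mdsmap" objects:

import json
import subprocess

dump = json.loads(subprocess.run(
    ["ceph", "fs", "dump", "--format", "json"],
    capture_output=True, text=True, check=True).stdout)
for fs in dump.get("filesystems", []):
    m = fs["mdsmap"]
    print(m["fs_name"], "max_mds:", m["max_mds"], "up:", m["up"])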
Nov 29 05:10:17 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 29 05:10:17 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 29 05:10:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 05:10:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 05:10:17 compute-0 systemd[1]: libpod-aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf.scope: Deactivated successfully.
Nov 29 05:10:17 compute-0 conmon[98520]: conmon aed017992dd71073d5de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf.scope/container/memory.events
Nov 29 05:10:17 compute-0 podman[98500]: 2025-11-29 05:10:17.435187545 +0000 UTC m=+0.973684583 container died aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:10:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9983812993cb03b190dde1b038e1cdc229816227c70a6e87ce0edb3e17334f0-merged.mount: Deactivated successfully.
Nov 29 05:10:17 compute-0 podman[98500]: 2025-11-29 05:10:17.483796987 +0000 UTC m=+1.022294035 container remove aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf (image=quay.io/ceph/ceph:v18, name=cool_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 05:10:17 compute-0 systemd[1]: libpod-conmon-aed017992dd71073d5ded25a657679904de793c7169e63c06be1bbe8dd5bc2cf.scope: Deactivated successfully.
Nov 29 05:10:17 compute-0 sudo[98463]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:17 compute-0 sudo[98622]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnkdmicewiyejyjpmwihxgndclhtdzpz ; /usr/bin/python3'
Nov 29 05:10:17 compute-0 sudo[98622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 05:10:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 05:10:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 05:10:17 compute-0 ceph-mon[75176]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 05:10:17 compute-0 ceph-mon[75176]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 05:10:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 05:10:17 compute-0 ceph-mon[75176]: osdmap e29: 3 total, 3 up, 3 in
Nov 29 05:10:17 compute-0 ceph-mon[75176]: fsmap cephfs:0
Nov 29 05:10:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:17 compute-0 python3[98624]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:17 compute-0 boring_rhodes[98560]: {
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:     "0": [
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:         {
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "devices": [
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "/dev/loop3"
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             ],
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_name": "ceph_lv0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_size": "21470642176",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "name": "ceph_lv0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "tags": {
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.crush_device_class": "",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.encrypted": "0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.osd_id": "0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.type": "block",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.vdo": "0"
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             },
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "type": "block",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "vg_name": "ceph_vg0"
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:         }
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:     ],
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:     "1": [
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:         {
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "devices": [
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "/dev/loop4"
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             ],
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_name": "ceph_lv1",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_size": "21470642176",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "name": "ceph_lv1",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "tags": {
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.crush_device_class": "",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.encrypted": "0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.osd_id": "1",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.type": "block",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.vdo": "0"
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             },
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "type": "block",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "vg_name": "ceph_vg1"
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:         }
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:     ],
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:     "2": [
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:         {
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "devices": [
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "/dev/loop5"
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             ],
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_name": "ceph_lv2",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_size": "21470642176",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "name": "ceph_lv2",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "tags": {
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.crush_device_class": "",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.encrypted": "0",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.osd_id": "2",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.type": "block",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:                 "ceph.vdo": "0"
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             },
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "type": "block",
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:             "vg_name": "ceph_vg2"
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:         }
Nov 29 05:10:17 compute-0 boring_rhodes[98560]:     ]
Nov 29 05:10:17 compute-0 boring_rhodes[98560]: }
Nov 29 05:10:17 compute-0 systemd[1]: libpod-f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91.scope: Deactivated successfully.
Nov 29 05:10:17 compute-0 podman[98543]: 2025-11-29 05:10:17.911169882 +0000 UTC m=+0.935780311 container died f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:17 compute-0 podman[98629]: 2025-11-29 05:10:17.943625231 +0000 UTC m=+0.064504570 container create 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 05:10:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dc2a92b462beca3a24041e3a490651290f9c37772d4310e31c5a76927394524-merged.mount: Deactivated successfully.
Nov 29 05:10:17 compute-0 podman[98543]: 2025-11-29 05:10:17.987782555 +0000 UTC m=+1.012392954 container remove f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 05:10:17 compute-0 podman[98629]: 2025-11-29 05:10:17.905887363 +0000 UTC m=+0.026766762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:17 compute-0 systemd[1]: Started libpod-conmon-803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba.scope.
Nov 29 05:10:18 compute-0 systemd[1]: libpod-conmon-f30134cf5f3b9a20109f3bcb2bcb96ad2f28b706cdbad3ad04ad10b1b9870c91.scope: Deactivated successfully.
Nov 29 05:10:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2186b800baee46b65d53adbadf2c6995517af6222601bf838c533ba71819dc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2186b800baee46b65d53adbadf2c6995517af6222601bf838c533ba71819dc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2186b800baee46b65d53adbadf2c6995517af6222601bf838c533ba71819dc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:18 compute-0 sudo[98397]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:18 compute-0 podman[98629]: 2025-11-29 05:10:18.050756017 +0000 UTC m=+0.171635346 container init 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 05:10:18 compute-0 podman[98629]: 2025-11-29 05:10:18.060732109 +0000 UTC m=+0.181611438 container start 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:10:18 compute-0 podman[98629]: 2025-11-29 05:10:18.064186953 +0000 UTC m=+0.185066272 container attach 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:18 compute-0 sudo[98660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:18 compute-0 sudo[98660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:18 compute-0 sudo[98660]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:18 compute-0 sudo[98686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:18 compute-0 sudo[98686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:18 compute-0 sudo[98686]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:18 compute-0 sudo[98711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:18 compute-0 sudo[98711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:18 compute-0 sudo[98711]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:18 compute-0 sudo[98736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:10:18 compute-0 sudo[98736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:10:18 compute-0 ceph-mgr[75473]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 29 05:10:18 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 29 05:10:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 05:10:18 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:18 compute-0 youthful_hopper[98657]: Scheduled mds.cephfs update...
Nov 29 05:10:18 compute-0 systemd[1]: libpod-803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba.scope: Deactivated successfully.
Nov 29 05:10:18 compute-0 podman[98629]: 2025-11-29 05:10:18.670624743 +0000 UTC m=+0.791504052 container died 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c2186b800baee46b65d53adbadf2c6995517af6222601bf838c533ba71819dc-merged.mount: Deactivated successfully.
Nov 29 05:10:18 compute-0 podman[98629]: 2025-11-29 05:10:18.712393579 +0000 UTC m=+0.833272898 container remove 803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba (image=quay.io/ceph/ceph:v18, name=youthful_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:18 compute-0 ceph-mon[75176]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:18 compute-0 ceph-mon[75176]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:10:18 compute-0 ceph-mon[75176]: Saving service mds.cephfs spec with placement compute-0
Nov 29 05:10:18 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:18 compute-0 sudo[98622]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:18 compute-0 podman[98823]: 2025-11-29 05:10:18.736632009 +0000 UTC m=+0.055293166 container create d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:18 compute-0 systemd[1]: libpod-conmon-803f29934da4c8b64eb2c5637d98c431772c3fe34448b7634d54bc3abae486ba.scope: Deactivated successfully.
Nov 29 05:10:18 compute-0 systemd[1]: Started libpod-conmon-d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4.scope.
Nov 29 05:10:18 compute-0 podman[98823]: 2025-11-29 05:10:18.709683563 +0000 UTC m=+0.028344730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:18 compute-0 podman[98823]: 2025-11-29 05:10:18.824909636 +0000 UTC m=+0.143570813 container init d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:10:18 compute-0 podman[98823]: 2025-11-29 05:10:18.836633631 +0000 UTC m=+0.155294778 container start d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:18 compute-0 podman[98823]: 2025-11-29 05:10:18.840647429 +0000 UTC m=+0.159308606 container attach d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 05:10:18 compute-0 stoic_haibt[98851]: 167 167
Nov 29 05:10:18 compute-0 systemd[1]: libpod-d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4.scope: Deactivated successfully.
Nov 29 05:10:18 compute-0 podman[98823]: 2025-11-29 05:10:18.842120674 +0000 UTC m=+0.160781831 container died d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:10:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f7ecfe192d3f691cfdf0e1a6e7d1244f16b7499cb00dac36d5f0013aad570f8-merged.mount: Deactivated successfully.
Nov 29 05:10:18 compute-0 podman[98823]: 2025-11-29 05:10:18.887747965 +0000 UTC m=+0.206409142 container remove d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_haibt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:18 compute-0 systemd[1]: libpod-conmon-d146437fa9832d3d27afae8f5d1a8e878ad2bcf133108007b732c40a784df8d4.scope: Deactivated successfully.
Nov 29 05:10:19 compute-0 podman[98873]: 2025-11-29 05:10:19.079544559 +0000 UTC m=+0.071665684 container create 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 05:10:19 compute-0 systemd[1]: Started libpod-conmon-842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1.scope.
Nov 29 05:10:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:19 compute-0 podman[98873]: 2025-11-29 05:10:19.053231559 +0000 UTC m=+0.045352754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080cabf18536d83e4d64c9487c3ab6fa245daa4842b0f496b858306ee915d27b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080cabf18536d83e4d64c9487c3ab6fa245daa4842b0f496b858306ee915d27b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080cabf18536d83e4d64c9487c3ab6fa245daa4842b0f496b858306ee915d27b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/080cabf18536d83e4d64c9487c3ab6fa245daa4842b0f496b858306ee915d27b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:19 compute-0 podman[98873]: 2025-11-29 05:10:19.207283786 +0000 UTC m=+0.199404921 container init 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:19 compute-0 podman[98873]: 2025-11-29 05:10:19.219971395 +0000 UTC m=+0.212092520 container start 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:10:19 compute-0 podman[98873]: 2025-11-29 05:10:19.223841699 +0000 UTC m=+0.215962854 container attach 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:19 compute-0 sudo[98970]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgxrpnikroijukzfgqtqxgmmnytekkua ; /usr/bin/python3'
Nov 29 05:10:19 compute-0 sudo[98970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:19 compute-0 python3[98972]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 05:10:19 compute-0 sudo[98970]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:19 compute-0 ceph-mon[75176]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:10:19 compute-0 ceph-mon[75176]: Saving service mds.cephfs spec with placement compute-0
Nov 29 05:10:19 compute-0 sudo[99043]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joowrryrbinnxpqgnpchmunelmvevjdn ; /usr/bin/python3'
Nov 29 05:10:19 compute-0 sudo[99043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:19 compute-0 python3[99045]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393019.1871386-36645-146956016647071/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=1cc9e4eb20e7af3f1c9d65ee54a3a3ef5b88c5e3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:10:20 compute-0 sudo[99043]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]: {
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "osd_id": 0,
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "type": "bluestore"
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:     },
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "osd_id": 1,
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "type": "bluestore"
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:     },
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "osd_id": 2,
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:         "type": "bluestore"
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]:     }
Nov 29 05:10:20 compute-0 stoic_proskuriakova[98893]: }
Nov 29 05:10:20 compute-0 systemd[1]: libpod-842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1.scope: Deactivated successfully.
Nov 29 05:10:20 compute-0 systemd[1]: libpod-842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1.scope: Consumed 1.063s CPU time.
Nov 29 05:10:20 compute-0 podman[98873]: 2025-11-29 05:10:20.270759942 +0000 UTC m=+1.262881057 container died 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:10:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-080cabf18536d83e4d64c9487c3ab6fa245daa4842b0f496b858306ee915d27b-merged.mount: Deactivated successfully.
Nov 29 05:10:20 compute-0 podman[98873]: 2025-11-29 05:10:20.326324273 +0000 UTC m=+1.318445388 container remove 842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 05:10:20 compute-0 systemd[1]: libpod-conmon-842a433753ba2b5b882201abb390560834f760638a35096803703f8639f7e7e1.scope: Deactivated successfully.
Nov 29 05:10:20 compute-0 sudo[98736]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:20 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:20 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:20 compute-0 sudo[99151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyyzbmuckfkmnpslzlfqvojkmwnnfntg ; /usr/bin/python3'
Nov 29 05:10:20 compute-0 sudo[99151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:20 compute-0 sudo[99118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:20 compute-0 sudo[99118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:20 compute-0 sudo[99118]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:20 compute-0 sudo[99161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:10:20 compute-0 sudo[99161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:20 compute-0 sudo[99161]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:20 compute-0 sudo[99186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:20 compute-0 sudo[99186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:20 compute-0 sudo[99186]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:20 compute-0 python3[99158]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:20 compute-0 sudo[99211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:20 compute-0 sudo[99211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:20 compute-0 sudo[99211]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:20 compute-0 podman[99234]: 2025-11-29 05:10:20.616410989 +0000 UTC m=+0.039064282 container create 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:20 compute-0 systemd[1]: Started libpod-conmon-039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2.scope.
Nov 29 05:10:20 compute-0 sudo[99247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:20 compute-0 sudo[99247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:20 compute-0 sudo[99247]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afebc566f55ff033696361e26b4ab0d46a52091949440e52de19f4ebe38d1da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4afebc566f55ff033696361e26b4ab0d46a52091949440e52de19f4ebe38d1da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:20 compute-0 podman[99234]: 2025-11-29 05:10:20.690072251 +0000 UTC m=+0.112725584 container init 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 05:10:20 compute-0 podman[99234]: 2025-11-29 05:10:20.599081688 +0000 UTC m=+0.021735001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:20 compute-0 podman[99234]: 2025-11-29 05:10:20.698389553 +0000 UTC m=+0.121042866 container start 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:20 compute-0 podman[99234]: 2025-11-29 05:10:20.702677537 +0000 UTC m=+0.125330840 container attach 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:20 compute-0 sudo[99279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:10:20 compute-0 sudo[99279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:20 compute-0 ceph-mon[75176]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:20 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:20 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:21 compute-0 podman[99395]: 2025-11-29 05:10:21.205032676 +0000 UTC m=+0.071360767 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 29 05:10:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/863128948' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 05:10:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/863128948' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 05:10:21 compute-0 systemd[1]: libpod-039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2.scope: Deactivated successfully.
Nov 29 05:10:21 compute-0 podman[99234]: 2025-11-29 05:10:21.31049798 +0000 UTC m=+0.733151273 container died 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:21 compute-0 podman[99395]: 2025-11-29 05:10:21.323755873 +0000 UTC m=+0.190083954 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:10:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4afebc566f55ff033696361e26b4ab0d46a52091949440e52de19f4ebe38d1da-merged.mount: Deactivated successfully.
Nov 29 05:10:21 compute-0 podman[99234]: 2025-11-29 05:10:21.363726115 +0000 UTC m=+0.786379408 container remove 039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2 (image=quay.io/ceph/ceph:v18, name=cool_margulis, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:10:21 compute-0 systemd[1]: libpod-conmon-039f402c783c9cc15ee6461aabcdd35a7ade2cf61b3afbbb009699ef50ac38d2.scope: Deactivated successfully.
Nov 29 05:10:21 compute-0 sudo[99151]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:21 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/863128948' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 05:10:21 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/863128948' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 05:10:21 compute-0 sudo[99279]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:10:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:10:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:10:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:21 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ccda369d-5c35-4153-a56b-088eaca9b871 does not exist
Nov 29 05:10:21 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 45cbdfcd-5e73-4a47-9ce2-8b5b951ba83f does not exist
Nov 29 05:10:21 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 6a7dc500-a9c3-4c1f-b91f-1ab31312f442 does not exist
Nov 29 05:10:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:10:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:10:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:10:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:10:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:21 compute-0 sudo[99529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:21 compute-0 sudo[99529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:21 compute-0 sudo[99529]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:21 compute-0 sudo[99554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:21 compute-0 sudo[99554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:21 compute-0 sudo[99554]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:22 compute-0 sudo[99625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqfdirgknhijxwqkgrkcihkkwbcoahmo ; /usr/bin/python3'
Nov 29 05:10:22 compute-0 sudo[99625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:22 compute-0 sudo[99584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:22 compute-0 sudo[99584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:22 compute-0 sudo[99584]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:22 compute-0 sudo[99630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:10:22 compute-0 sudo[99630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:22 compute-0 python3[99628]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:22 compute-0 podman[99656]: 2025-11-29 05:10:22.258734713 +0000 UTC m=+0.043923079 container create 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:22 compute-0 systemd[1]: Started libpod-conmon-07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc.scope.
Nov 29 05:10:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35affc3ccae3c6c331213770449e3068531e63840c368792f8d662ccaa131e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a35affc3ccae3c6c331213770449e3068531e63840c368792f8d662ccaa131e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:22 compute-0 podman[99656]: 2025-11-29 05:10:22.239684061 +0000 UTC m=+0.024872467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:22 compute-0 podman[99656]: 2025-11-29 05:10:22.342665735 +0000 UTC m=+0.127854131 container init 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:22 compute-0 podman[99656]: 2025-11-29 05:10:22.350105055 +0000 UTC m=+0.135293431 container start 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:22 compute-0 podman[99656]: 2025-11-29 05:10:22.363183034 +0000 UTC m=+0.148371430 container attach 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:22 compute-0 podman[99715]: 2025-11-29 05:10:22.496615809 +0000 UTC m=+0.041306785 container create f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 29 05:10:22 compute-0 systemd[1]: Started libpod-conmon-f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e.scope.
Nov 29 05:10:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:22 compute-0 podman[99715]: 2025-11-29 05:10:22.479087423 +0000 UTC m=+0.023778379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 05:10:23 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878632264' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 05:10:23 compute-0 stupefied_thompson[99682]: 
Nov 29 05:10:23 compute-0 stupefied_thompson[99682]: {"fsid":"93f82912-647c-5e78-b081-707d0a2966d8","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":149,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":29,"num_osds":3,"num_up_osds":3,"osd_up_since":1764392994,"num_in_osds":3,"osd_in_since":1764392965,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83767296,"bytes_avail":64328159232,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T05:09:43.260960+0000","services":{}},"progress_events":{}}
Nov 29 05:10:23 compute-0 podman[99715]: 2025-11-29 05:10:23.26097519 +0000 UTC m=+0.805666126 container init f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:23 compute-0 ceph-mon[75176]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:10:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:10:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:10:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:23 compute-0 podman[99715]: 2025-11-29 05:10:23.267028517 +0000 UTC m=+0.811719453 container start f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:10:23 compute-0 bold_lovelace[99731]: 167 167
Nov 29 05:10:23 compute-0 systemd[1]: libpod-f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e.scope: Deactivated successfully.
Nov 29 05:10:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:23 compute-0 podman[99715]: 2025-11-29 05:10:23.276485018 +0000 UTC m=+0.821175974 container attach f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 05:10:23 compute-0 podman[99715]: 2025-11-29 05:10:23.276813975 +0000 UTC m=+0.821504911 container died f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:10:23 compute-0 systemd[1]: libpod-07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc.scope: Deactivated successfully.
Nov 29 05:10:23 compute-0 podman[99656]: 2025-11-29 05:10:23.285935898 +0000 UTC m=+1.071124264 container died 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f650ffe984d44ec4ef644c2787a44e9ee2cc13480aba9011f54331be3322581-merged.mount: Deactivated successfully.
Nov 29 05:10:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a35affc3ccae3c6c331213770449e3068531e63840c368792f8d662ccaa131e1-merged.mount: Deactivated successfully.
Nov 29 05:10:23 compute-0 podman[99715]: 2025-11-29 05:10:23.322669981 +0000 UTC m=+0.867360917 container remove f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 05:10:23 compute-0 systemd[1]: libpod-conmon-f72a6e909064bd592b9e49457038b33326b8e609bbd734a3350da7d156858e9e.scope: Deactivated successfully.
Nov 29 05:10:23 compute-0 podman[99656]: 2025-11-29 05:10:23.364744154 +0000 UTC m=+1.149932530 container remove 07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc (image=quay.io/ceph/ceph:v18, name=stupefied_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:23 compute-0 systemd[1]: libpod-conmon-07a3e01c3726eb5081171074126289694f612cfc487a0477f1b249937e4854fc.scope: Deactivated successfully.
Nov 29 05:10:23 compute-0 sudo[99625]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:23 compute-0 podman[99787]: 2025-11-29 05:10:23.496435207 +0000 UTC m=+0.062587574 container create 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:23 compute-0 systemd[1]: Started libpod-conmon-43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe.scope.
Nov 29 05:10:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:23 compute-0 podman[99787]: 2025-11-29 05:10:23.474516653 +0000 UTC m=+0.040669070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:23 compute-0 sudo[99827]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjfquicljngzyndquocfejosolugrcae ; /usr/bin/python3'
Nov 29 05:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:23 compute-0 sudo[99827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:23 compute-0 podman[99787]: 2025-11-29 05:10:23.583663438 +0000 UTC m=+0.149815805 container init 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 05:10:23 compute-0 podman[99787]: 2025-11-29 05:10:23.596959892 +0000 UTC m=+0.163112269 container start 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 05:10:23 compute-0 podman[99787]: 2025-11-29 05:10:23.600416405 +0000 UTC m=+0.166568782 container attach 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:23 compute-0 python3[99832]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:23 compute-0 podman[99835]: 2025-11-29 05:10:23.850511359 +0000 UTC m=+0.069516032 container create 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 05:10:23 compute-0 systemd[1]: Started libpod-conmon-86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad.scope.
Nov 29 05:10:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf8a4b601b8d6c194625dcadaa448a824937b576a93e047c64d75d0b18cc189/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf8a4b601b8d6c194625dcadaa448a824937b576a93e047c64d75d0b18cc189/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:23 compute-0 podman[99835]: 2025-11-29 05:10:23.818423548 +0000 UTC m=+0.037428271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:23 compute-0 podman[99835]: 2025-11-29 05:10:23.918420451 +0000 UTC m=+0.137425114 container init 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:23 compute-0 podman[99835]: 2025-11-29 05:10:23.930994376 +0000 UTC m=+0.149999009 container start 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:10:23 compute-0 podman[99835]: 2025-11-29 05:10:23.935017804 +0000 UTC m=+0.154022517 container attach 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:24 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/878632264' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 05:10:24 compute-0 ceph-mon[75176]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:10:24 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424060983' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:10:24 compute-0 friendly_golick[99850]: 
Nov 29 05:10:24 compute-0 friendly_golick[99850]: {"epoch":1,"fsid":"93f82912-647c-5e78-b081-707d0a2966d8","modified":"2025-11-29T05:07:49.180526Z","created":"2025-11-29T05:07:49.180526Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 29 05:10:24 compute-0 friendly_golick[99850]: dumped monmap epoch 1
Nov 29 05:10:24 compute-0 systemd[1]: libpod-86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad.scope: Deactivated successfully.
Nov 29 05:10:24 compute-0 podman[99835]: 2025-11-29 05:10:24.613344383 +0000 UTC m=+0.832349186 container died 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:10:24 compute-0 eager_thompson[99828]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:10:24 compute-0 eager_thompson[99828]: --> relative data size: 1.0
Nov 29 05:10:24 compute-0 eager_thompson[99828]: --> All data devices are unavailable
Nov 29 05:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdf8a4b601b8d6c194625dcadaa448a824937b576a93e047c64d75d0b18cc189-merged.mount: Deactivated successfully.
Nov 29 05:10:24 compute-0 systemd[1]: libpod-43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe.scope: Deactivated successfully.
Nov 29 05:10:24 compute-0 podman[99835]: 2025-11-29 05:10:24.665715876 +0000 UTC m=+0.884720509 container remove 86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad (image=quay.io/ceph/ceph:v18, name=friendly_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:24 compute-0 podman[99787]: 2025-11-29 05:10:24.666786092 +0000 UTC m=+1.232938459 container died 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:24 compute-0 systemd[1]: libpod-conmon-86b0b2951f879857c2edd41ae2ef56f7715747ba2e80fb1c593b0a0405b4c6ad.scope: Deactivated successfully.
Nov 29 05:10:24 compute-0 sudo[99827]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0adda4f18066699c9b3816acdf83e0f8acc9a3a803cf6c95294b9e8bd83c8c4-merged.mount: Deactivated successfully.
Nov 29 05:10:24 compute-0 podman[99787]: 2025-11-29 05:10:24.714275047 +0000 UTC m=+1.280427414 container remove 43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:10:24 compute-0 systemd[1]: libpod-conmon-43512a112a7a449def30e52e6cbacc22b88ecebb74ea79eb3d63da81632359fe.scope: Deactivated successfully.
Nov 29 05:10:24 compute-0 sudo[99630]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:24 compute-0 sudo[99922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:24 compute-0 sudo[99922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:24 compute-0 sudo[99922]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:24 compute-0 sudo[99947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:24 compute-0 sudo[99947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:24 compute-0 sudo[99947]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:24 compute-0 sudo[99972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:24 compute-0 sudo[99972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:24 compute-0 sudo[99972]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:24 compute-0 sudo[99997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:10:24 compute-0 sudo[99997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:25 compute-0 sudo[100045]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxjzgujglnwnfnkdfxncvxeepzbqwdef ; /usr/bin/python3'
Nov 29 05:10:25 compute-0 sudo[100045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:25 compute-0 python3[100049]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:25 compute-0 podman[100084]: 2025-11-29 05:10:25.264987571 +0000 UTC m=+0.032877890 container create 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:10:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:25 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3424060983' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:10:25 compute-0 systemd[1]: Started libpod-conmon-2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9.scope.
Nov 29 05:10:25 compute-0 podman[100098]: 2025-11-29 05:10:25.309222318 +0000 UTC m=+0.043425437 container create d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 05:10:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:25 compute-0 systemd[1]: Started libpod-conmon-d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1.scope.
Nov 29 05:10:25 compute-0 podman[100084]: 2025-11-29 05:10:25.338050538 +0000 UTC m=+0.105940897 container init 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 05:10:25 compute-0 podman[100084]: 2025-11-29 05:10:25.346966426 +0000 UTC m=+0.114856775 container start 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:25 compute-0 podman[100084]: 2025-11-29 05:10:25.250854208 +0000 UTC m=+0.018744547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:25 compute-0 sharp_boyd[100113]: 167 167
Nov 29 05:10:25 compute-0 systemd[1]: libpod-2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9.scope: Deactivated successfully.
Nov 29 05:10:25 compute-0 podman[100084]: 2025-11-29 05:10:25.350619654 +0000 UTC m=+0.118509983 container attach 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:25 compute-0 podman[100084]: 2025-11-29 05:10:25.353885544 +0000 UTC m=+0.121775873 container died 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6ae5cfde459b6841111cd98615ec76df7fcdb5fdc025ed54f5a25aa5ebe88b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6ae5cfde459b6841111cd98615ec76df7fcdb5fdc025ed54f5a25aa5ebe88b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e0782cd66a0fb0b8912eb61acec9cd4090b549be54098dfff2e56e108bf8cf7-merged.mount: Deactivated successfully.
Nov 29 05:10:25 compute-0 podman[100098]: 2025-11-29 05:10:25.292668185 +0000 UTC m=+0.026871324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:25 compute-0 podman[100084]: 2025-11-29 05:10:25.398197632 +0000 UTC m=+0.166087951 container remove 2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_boyd, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 05:10:25 compute-0 podman[100098]: 2025-11-29 05:10:25.414761415 +0000 UTC m=+0.148964534 container init d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:25 compute-0 podman[100098]: 2025-11-29 05:10:25.421921729 +0000 UTC m=+0.156124848 container start d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:25 compute-0 systemd[1]: libpod-conmon-2d391b79838c475eb596318f05577b7e825bd4abd53b34af04ceb843fc7ca3a9.scope: Deactivated successfully.
Nov 29 05:10:25 compute-0 podman[100098]: 2025-11-29 05:10:25.425300571 +0000 UTC m=+0.159503700 container attach d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:25 compute-0 podman[100142]: 2025-11-29 05:10:25.561250298 +0000 UTC m=+0.047434225 container create 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:25 compute-0 systemd[1]: Started libpod-conmon-5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95.scope.
Nov 29 05:10:25 compute-0 podman[100142]: 2025-11-29 05:10:25.540171845 +0000 UTC m=+0.026355772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc31c6763ea701d2d52823822807eb63030adb29c7589bcdd60e55f059b64a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc31c6763ea701d2d52823822807eb63030adb29c7589bcdd60e55f059b64a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc31c6763ea701d2d52823822807eb63030adb29c7589bcdd60e55f059b64a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc31c6763ea701d2d52823822807eb63030adb29c7589bcdd60e55f059b64a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:25 compute-0 podman[100142]: 2025-11-29 05:10:25.680414926 +0000 UTC m=+0.166598873 container init 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:10:25 compute-0 podman[100142]: 2025-11-29 05:10:25.686251788 +0000 UTC m=+0.172435685 container start 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:10:25 compute-0 podman[100142]: 2025-11-29 05:10:25.69004741 +0000 UTC m=+0.176231387 container attach 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 29 05:10:26 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1191791665' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 05:10:26 compute-0 fervent_swartz[100118]: [client.openstack]
Nov 29 05:10:26 compute-0 fervent_swartz[100118]:         key = AQCLfyppAAAAABAAXOcH7jxI2CDW0wmPcSvJrA==
Nov 29 05:10:26 compute-0 fervent_swartz[100118]:         caps mgr = "allow *"
Nov 29 05:10:26 compute-0 fervent_swartz[100118]:         caps mon = "profile rbd"
Nov 29 05:10:26 compute-0 fervent_swartz[100118]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
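
The fervent_swartz lines above are the keyring that `auth get client.openstack` returns; the mon logs the matching dispatch at 05:10:26. The same query can be reproduced standalone with the podman pattern used elsewhere in this log (a sketch, assuming the admin keyring is present under /etc/ceph as the later invocations mount it):

    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        -k /etc/ceph/ceph.client.admin.keyring auth get client.openstack
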
Nov 29 05:10:26 compute-0 systemd[1]: libpod-d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1.scope: Deactivated successfully.
Nov 29 05:10:26 compute-0 conmon[100118]: conmon d080f1f1b70f2ce814d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1.scope/container/memory.events
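
conmon's <nwarn> here appears to be ordering noise rather than a failure: systemd has already deactivated the libpod scope (previous line) by the time conmon re-reads the container's memory.events, so the cgroup file is gone. For a long-lived container the same path would still be readable:

    # exists only while the scope is active; for these one-shot containers it is already removed
    cat /sys/fs/cgroup/machine.slice/libpod-d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1.scope/container/memory.events
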
Nov 29 05:10:26 compute-0 podman[100184]: 2025-11-29 05:10:26.075179037 +0000 UTC m=+0.030698067 container died d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab6ae5cfde459b6841111cd98615ec76df7fcdb5fdc025ed54f5a25aa5ebe88b-merged.mount: Deactivated successfully.
Nov 29 05:10:26 compute-0 podman[100184]: 2025-11-29 05:10:26.119729721 +0000 UTC m=+0.075248771 container remove d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1 (image=quay.io/ceph/ceph:v18, name=fervent_swartz, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:26 compute-0 systemd[1]: libpod-conmon-d080f1f1b70f2ce814d9eb6d98e3bf1e54bd133837ae74fd5cf095fe777f15a1.scope: Deactivated successfully.
Nov 29 05:10:26 compute-0 sudo[100045]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:26 compute-0 ceph-mon[75176]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1191791665' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 05:10:26 compute-0 sleepy_noether[100158]: {
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:     "0": [
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:         {
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "devices": [
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "/dev/loop3"
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             ],
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_name": "ceph_lv0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_size": "21470642176",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "name": "ceph_lv0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "tags": {
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.crush_device_class": "",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.encrypted": "0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.osd_id": "0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.type": "block",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.vdo": "0"
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             },
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "type": "block",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "vg_name": "ceph_vg0"
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:         }
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:     ],
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:     "1": [
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:         {
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "devices": [
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "/dev/loop4"
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             ],
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_name": "ceph_lv1",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_size": "21470642176",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "name": "ceph_lv1",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "tags": {
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.crush_device_class": "",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.encrypted": "0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.osd_id": "1",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.type": "block",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.vdo": "0"
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             },
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "type": "block",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "vg_name": "ceph_vg1"
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:         }
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:     ],
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:     "2": [
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:         {
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "devices": [
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "/dev/loop5"
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             ],
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_name": "ceph_lv2",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_size": "21470642176",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "name": "ceph_lv2",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "tags": {
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.crush_device_class": "",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.encrypted": "0",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.osd_id": "2",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.type": "block",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:                 "ceph.vdo": "0"
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             },
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "type": "block",
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:             "vg_name": "ceph_vg2"
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:         }
Nov 29 05:10:26 compute-0 sleepy_noether[100158]:     ]
Nov 29 05:10:26 compute-0 sleepy_noether[100158]: }
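
The JSON sleepy_noether just printed maps OSD ids 0, 1 and 2 to ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2, each a ~20 GiB LV (21470642176 bytes) on a loop device, all tagged with cluster fsid 93f82912-647c-5e78-b081-707d0a2966d8. The shape matches ceph-volume's LVM listing; by analogy with the `raw list` invocation logged at 05:10:26 below, the equivalent call would be (a sketch, with `lvm` substituted for `raw`):

    sudo /bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d \
        --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
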
Nov 29 05:10:26 compute-0 systemd[1]: libpod-5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95.scope: Deactivated successfully.
Nov 29 05:10:26 compute-0 podman[100142]: 2025-11-29 05:10:26.479717417 +0000 UTC m=+0.965901304 container died 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:10:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cc31c6763ea701d2d52823822807eb63030adb29c7589bcdd60e55f059b64a3-merged.mount: Deactivated successfully.
Nov 29 05:10:26 compute-0 podman[100142]: 2025-11-29 05:10:26.538158158 +0000 UTC m=+1.024342045 container remove 5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_noether, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:10:26 compute-0 systemd[1]: libpod-conmon-5f2fe2e28e05bccf46c41c242c0d674f7cec601e4d49df34f418799cb86c3e95.scope: Deactivated successfully.
Nov 29 05:10:26 compute-0 sudo[99997]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:26 compute-0 sudo[100216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:26 compute-0 sudo[100216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:26 compute-0 sudo[100216]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:26 compute-0 sudo[100241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:26 compute-0 sudo[100241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:26 compute-0 sudo[100241]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:26 compute-0 sudo[100266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:26 compute-0 sudo[100266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:26 compute-0 sudo[100266]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:26 compute-0 sudo[100291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:10:26 compute-0 sudo[100291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:27 compute-0 podman[100356]: 2025-11-29 05:10:27.306352752 +0000 UTC m=+0.058025092 container create 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:27 compute-0 systemd[1]: Started libpod-conmon-46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c.scope.
Nov 29 05:10:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:27 compute-0 podman[100356]: 2025-11-29 05:10:27.288801925 +0000 UTC m=+0.040474235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:27 compute-0 podman[100356]: 2025-11-29 05:10:27.399922988 +0000 UTC m=+0.151595378 container init 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:27 compute-0 podman[100356]: 2025-11-29 05:10:27.407218795 +0000 UTC m=+0.158891135 container start 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:27 compute-0 podman[100356]: 2025-11-29 05:10:27.411055979 +0000 UTC m=+0.162728329 container attach 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:27 compute-0 ecstatic_liskov[100413]: 167 167
Nov 29 05:10:27 compute-0 systemd[1]: libpod-46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c.scope: Deactivated successfully.
Nov 29 05:10:27 compute-0 podman[100356]: 2025-11-29 05:10:27.412728699 +0000 UTC m=+0.164401109 container died 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-34ad1d2accd645dbd68f810ba9dc3b4b17729dbac0138d76734a5e001d880387-merged.mount: Deactivated successfully.
Nov 29 05:10:27 compute-0 podman[100356]: 2025-11-29 05:10:27.46948095 +0000 UTC m=+0.221153260 container remove 46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 05:10:27 compute-0 systemd[1]: libpod-conmon-46e06e40a892dcced04811213ac32302ce4768271394af8624461d9493d2cd8c.scope: Deactivated successfully.
Nov 29 05:10:27 compute-0 podman[100495]: 2025-11-29 05:10:27.606579004 +0000 UTC m=+0.038751133 container create afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:27 compute-0 systemd[1]: Started libpod-conmon-afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379.scope.
Nov 29 05:10:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a3da6494b8e8f1bc878cbcae630a5036da3aec7018f54cecb799120493818/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a3da6494b8e8f1bc878cbcae630a5036da3aec7018f54cecb799120493818/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a3da6494b8e8f1bc878cbcae630a5036da3aec7018f54cecb799120493818/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d1a3da6494b8e8f1bc878cbcae630a5036da3aec7018f54cecb799120493818/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:27 compute-0 podman[100495]: 2025-11-29 05:10:27.587875429 +0000 UTC m=+0.020047568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:27 compute-0 podman[100495]: 2025-11-29 05:10:27.693841256 +0000 UTC m=+0.126013395 container init afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:27 compute-0 podman[100495]: 2025-11-29 05:10:27.701479502 +0000 UTC m=+0.133651621 container start afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 05:10:27 compute-0 podman[100495]: 2025-11-29 05:10:27.704235129 +0000 UTC m=+0.136407248 container attach afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:27 compute-0 sudo[100566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdjayqziaztslwalzyyzmhaofvydazzv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764393027.2831461-36717-36707949816144/async_wrapper.py j606962052817 30 /home/zuul/.ansible/tmp/ansible-tmp-1764393027.2831461-36717-36707949816144/AnsiballZ_command.py _'
Nov 29 05:10:27 compute-0 sudo[100566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:27 compute-0 ansible-async_wrapper.py[100568]: Invoked with j606962052817 30 /home/zuul/.ansible/tmp/ansible-tmp-1764393027.2831461-36717-36707949816144/AnsiballZ_command.py _
Nov 29 05:10:27 compute-0 ansible-async_wrapper.py[100571]: Starting module and watcher
Nov 29 05:10:27 compute-0 ansible-async_wrapper.py[100571]: Start watching 100572 (30)
Nov 29 05:10:27 compute-0 ansible-async_wrapper.py[100572]: Start module (100572)
Nov 29 05:10:27 compute-0 ansible-async_wrapper.py[100568]: Return async_wrapper task started.
Nov 29 05:10:27 compute-0 sudo[100566]: pam_unix(sudo:session): session closed for user root
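
The ansible-async_wrapper.py sequence above is Ansible's async pattern end to end: the wrapper forks the real module (pid 100572), leaves a watcher armed with the 30-second limit passed on the command line, reports "task started" back immediately, and the sudo session closes; the result lands in the async dir that async_status polls a couple of seconds later. The job's status file can be read directly (path taken from the async_status call further down):

    # jid j606962052817.100568 is <job id>.<wrapper pid>; the file holds the module's JSON result
    cat /root/.ansible_async/j606962052817.100568
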
Nov 29 05:10:28 compute-0 python3[100573]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:28 compute-0 podman[100574]: 2025-11-29 05:10:28.071556883 +0000 UTC m=+0.052150360 container create 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:28 compute-0 systemd[1]: Started libpod-conmon-364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f.scope.
Nov 29 05:10:28 compute-0 podman[100574]: 2025-11-29 05:10:28.046205497 +0000 UTC m=+0.026799034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff36e0e9ec2dced2154c9c1b43a27dc3be93cbf4691fc9edd8dee78f632c1d1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff36e0e9ec2dced2154c9c1b43a27dc3be93cbf4691fc9edd8dee78f632c1d1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:28 compute-0 podman[100574]: 2025-11-29 05:10:28.168831639 +0000 UTC m=+0.149425206 container init 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:28 compute-0 podman[100574]: 2025-11-29 05:10:28.182731457 +0000 UTC m=+0.163324964 container start 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:28 compute-0 podman[100574]: 2025-11-29 05:10:28.188523608 +0000 UTC m=+0.169117125 container attach 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 05:10:28 compute-0 ceph-mon[75176]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:28 compute-0 kind_rosalind[100536]: {
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "osd_id": 0,
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "type": "bluestore"
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:     },
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "osd_id": 1,
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "type": "bluestore"
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:     },
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "osd_id": 2,
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:         "type": "bluestore"
Nov 29 05:10:28 compute-0 kind_rosalind[100536]:     }
Nov 29 05:10:28 compute-0 kind_rosalind[100536]: }
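
kind_rosalind's JSON is the stdout of the `raw list` call launched at 05:10:26: the same three bluestore OSDs, now keyed by osd_uuid and resolved to their device-mapper paths. Extracting just the id-to-device mapping, assuming the blob is saved to a file (raw_list.json is a hypothetical name) and jq is available:

    jq -r 'to_entries[] | "osd.\(.value.osd_id) -> \(.value.device)"' raw_list.json
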
Nov 29 05:10:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:10:28 compute-0 interesting_satoshi[100589]: 
Nov 29 05:10:28 compute-0 interesting_satoshi[100589]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
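
That one-line reply is the answer to the `orch status --format json` run dispatched by the Ansible task at 05:10:28: the cephadm backend is available, not paused, with 10 worker threads. As a readiness gate, the same call can be chained into jq (a sketch; jq assumed installed):

    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        -k /etc/ceph/ceph.client.admin.keyring orch status --format json \
        | jq -e '.available and (.paused | not)'   # exit 0 only when ready
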
Nov 29 05:10:28 compute-0 systemd[1]: libpod-364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f.scope: Deactivated successfully.
Nov 29 05:10:28 compute-0 podman[100574]: 2025-11-29 05:10:28.727400975 +0000 UTC m=+0.707994492 container died 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:28 compute-0 systemd[1]: libpod-afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379.scope: Deactivated successfully.
Nov 29 05:10:28 compute-0 systemd[1]: libpod-afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379.scope: Consumed 1.052s CPU time.
Nov 29 05:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-dff36e0e9ec2dced2154c9c1b43a27dc3be93cbf4691fc9edd8dee78f632c1d1-merged.mount: Deactivated successfully.
Nov 29 05:10:28 compute-0 podman[100574]: 2025-11-29 05:10:28.783974351 +0000 UTC m=+0.764567828 container remove 364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f (image=quay.io/ceph/ceph:v18, name=interesting_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:28 compute-0 systemd[1]: libpod-conmon-364bd5568f020e99075fd50aa9b64de1549d55c6c130c7963d722e1983e3587f.scope: Deactivated successfully.
Nov 29 05:10:28 compute-0 podman[100649]: 2025-11-29 05:10:28.808891345 +0000 UTC m=+0.041104039 container died afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:28 compute-0 ansible-async_wrapper.py[100572]: Module complete (100572)
Nov 29 05:10:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d1a3da6494b8e8f1bc878cbcae630a5036da3aec7018f54cecb799120493818-merged.mount: Deactivated successfully.
Nov 29 05:10:28 compute-0 podman[100649]: 2025-11-29 05:10:28.876826503 +0000 UTC m=+0.109039117 container remove afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:28 compute-0 systemd[1]: libpod-conmon-afefc4ffdb36f24070a77c9ce750b19214e45ba996ffdc1bbcde2788ef49e379.scope: Deactivated successfully.
Nov 29 05:10:28 compute-0 sudo[100291]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:28 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:28 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:28 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 124d89dd-391b-4b34-9945-16d1dcae5fd1 (Updating rgw.rgw deployment (+1 -> 1))
Nov 29 05:10:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dwtrck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 05:10:28 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dwtrck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 05:10:28 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dwtrck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 05:10:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 05:10:28 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:28 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:28 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dwtrck on compute-0
Nov 29 05:10:28 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dwtrck on compute-0
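
Deploying the rgw service is a four-step mon/mgr exchange, all visible above: create (or fetch) the daemon's keyring, set rgw_frontends via `config set`, render a minimal ceph.conf with `config generate-minimal-conf`, then hand the actual deploy to cephadm on the host. The keyring step in plain CLI form, with exactly the caps from the audit line:

    ceph auth get-or-create client.rgw.rgw.compute-0.dwtrck \
        mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'
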
Nov 29 05:10:29 compute-0 sudo[100691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:29 compute-0 sudo[100691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:29 compute-0 sudo[100691]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:29 compute-0 sudo[100755]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elyujhmhamycxmspmcnfigjjjsmqljyc ; /usr/bin/python3'
Nov 29 05:10:29 compute-0 sudo[100755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:29 compute-0 sudo[100728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:29 compute-0 sudo[100728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:29 compute-0 sudo[100728]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:29 compute-0 sudo[100767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:29 compute-0 sudo[100767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:29 compute-0 sudo[100767]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:29 compute-0 python3[100764]: ansible-ansible.legacy.async_status Invoked with jid=j606962052817.100568 mode=status _async_dir=/root/.ansible_async
Nov 29 05:10:29 compute-0 sudo[100755]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:29 compute-0 sudo[100792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:10:29 compute-0 sudo[100792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
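
Note the binary being sudo-run: the mgr does not call a packaged cephadm but a copy it staged under /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/, suffixed with a digest so differing versions cannot collide. If that suffix follows cephadm's usual convention of being the file's own SHA-256 (an assumption, not verified here), it can be checked in place:

    # assumption: the digest in the filename should match the file's SHA-256
    sha256sum /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
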
Nov 29 05:10:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:29 compute-0 sudo[100871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzgwbakncakaxyifsjomvmwfeglagjyt ; /usr/bin/python3'
Nov 29 05:10:29 compute-0 sudo[100871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:29 compute-0 python3[100876]: ansible-ansible.legacy.async_status Invoked with jid=j606962052817.100568 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 05:10:29 compute-0 sudo[100871]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:29 compute-0 podman[100905]: 2025-11-29 05:10:29.64871347 +0000 UTC m=+0.054875869 container create a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 05:10:29 compute-0 systemd[1]: Started libpod-conmon-a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e.scope.
Nov 29 05:10:29 compute-0 podman[100905]: 2025-11-29 05:10:29.618941436 +0000 UTC m=+0.025103875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:29 compute-0 podman[100905]: 2025-11-29 05:10:29.74254156 +0000 UTC m=+0.148704039 container init a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:10:29 compute-0 podman[100905]: 2025-11-29 05:10:29.756774207 +0000 UTC m=+0.162936566 container start a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 05:10:29 compute-0 podman[100905]: 2025-11-29 05:10:29.760611468 +0000 UTC m=+0.166773917 container attach a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 29 05:10:29 compute-0 nervous_meitner[100921]: 167 167
Nov 29 05:10:29 compute-0 systemd[1]: libpod-a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e.scope: Deactivated successfully.
Nov 29 05:10:29 compute-0 podman[100905]: 2025-11-29 05:10:29.764638604 +0000 UTC m=+0.170801033 container died a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:10:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-46677b3988112b0e01d31b364b7e9a83f10c8413d4ab13c6412789d90aaa225a-merged.mount: Deactivated successfully.
Nov 29 05:10:29 compute-0 podman[100905]: 2025-11-29 05:10:29.817452534 +0000 UTC m=+0.223614923 container remove a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_meitner, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:29 compute-0 systemd[1]: libpod-conmon-a080cc69db41caeb3bf37f125a84ca49b429cc00d6109f3348316ca28507712e.scope: Deactivated successfully.
Nov 29 05:10:29 compute-0 systemd[1]: Reloading.
Nov 29 05:10:29 compute-0 ceph-mon[75176]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:10:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dwtrck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 05:10:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dwtrck", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 05:10:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:29 compute-0 ceph-mon[75176]: Deploying daemon rgw.rgw.compute-0.dwtrck on compute-0
Nov 29 05:10:29 compute-0 systemd-rc-local-generator[100967]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:10:29 compute-0 systemd-sysv-generator[100970]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
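
systemd reloads twice here, presumably because cephadm is installing the unit files for the new rgw daemon; the two generator messages repeat on every daemon-reload and are unrelated to Ceph. If rc.local is actually wanted at boot, the first warning is silenced by marking the script executable:

    chmod +x /etc/rc.d/rc.local
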
Nov 29 05:10:30 compute-0 sudo[100998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfcwbhtfrlvqhsiwhxmsguffuxxpfgnv ; /usr/bin/python3'
Nov 29 05:10:30 compute-0 sudo[100998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:30 compute-0 systemd[1]: Reloading.
Nov 29 05:10:30 compute-0 systemd-rc-local-generator[101030]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:10:30 compute-0 systemd-sysv-generator[101033]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:10:30 compute-0 python3[101002]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
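This Ansible task runs the ceph CLI inside a throwaway ceph:v18 container to ask the cephadm orchestrator for its status; the JSON answer appears a moment later from the determined_nobel container. An equivalent interactive check, assuming the cephadm package is installed on the host and /etc/ceph holds the admin keyring, is:

    # open a cephadm-managed container shell and query the orchestrator
    cephadm shell -- ceph orch status --format json
    # expected shape of the reply (see the determined_nobel output below):
    #   {"available": true, "backend": "cephadm", "paused": false, "workers": 10}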
Nov 29 05:10:30 compute-0 podman[101041]: 2025-11-29 05:10:30.384244596 +0000 UTC m=+0.048762264 container create 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:30 compute-0 podman[101041]: 2025-11-29 05:10:30.36032723 +0000 UTC m=+0.024844928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:30 compute-0 systemd[1]: Started libpod-conmon-97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929.scope.
Nov 29 05:10:30 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.dwtrck for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:10:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1908ffe131b7552f48f85ca272914ea130999d9c76e0f899bd4d00f080f8e1a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1908ffe131b7552f48f85ca272914ea130999d9c76e0f899bd4d00f080f8e1a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:30 compute-0 podman[101041]: 2025-11-29 05:10:30.50312817 +0000 UTC m=+0.167645838 container init 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:10:30 compute-0 podman[101041]: 2025-11-29 05:10:30.519330524 +0000 UTC m=+0.183848192 container start 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:30 compute-0 podman[101041]: 2025-11-29 05:10:30.523004441 +0000 UTC m=+0.187522139 container attach 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:10:30 compute-0 podman[101111]: 2025-11-29 05:10:30.697679464 +0000 UTC m=+0.057958592 container create bb930ede36ba1337dc325a1db70732694da557ce5f77bc8602a423b1ea8ce970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-rgw-rgw-compute-0-dwtrck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5635b4c74c64e4be14a69cf78c4a8e21b1b1537ece3ecb6b3651c61391f3127/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5635b4c74c64e4be14a69cf78c4a8e21b1b1537ece3ecb6b3651c61391f3127/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5635b4c74c64e4be14a69cf78c4a8e21b1b1537ece3ecb6b3651c61391f3127/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5635b4c74c64e4be14a69cf78c4a8e21b1b1537ece3ecb6b3651c61391f3127/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dwtrck supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:30 compute-0 podman[101111]: 2025-11-29 05:10:30.755983224 +0000 UTC m=+0.116262382 container init bb930ede36ba1337dc325a1db70732694da557ce5f77bc8602a423b1ea8ce970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-rgw-rgw-compute-0-dwtrck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:30 compute-0 podman[101111]: 2025-11-29 05:10:30.760810408 +0000 UTC m=+0.121089536 container start bb930ede36ba1337dc325a1db70732694da557ce5f77bc8602a423b1ea8ce970 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-rgw-rgw-compute-0-dwtrck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 05:10:30 compute-0 bash[101111]: bb930ede36ba1337dc325a1db70732694da557ce5f77bc8602a423b1ea8ce970
Nov 29 05:10:30 compute-0 podman[101111]: 2025-11-29 05:10:30.679652987 +0000 UTC m=+0.039932135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:30 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.dwtrck for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:10:30 compute-0 sudo[100792]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:30 compute-0 radosgw[101131]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 05:10:30 compute-0 radosgw[101131]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 05:10:30 compute-0 radosgw[101131]: framework: beast
Nov 29 05:10:30 compute-0 radosgw[101131]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 29 05:10:30 compute-0 radosgw[101131]: init_numa not setting numa affinity
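radosgw came up with the beast frontend bound to 192.168.122.100:8082, which matches the rgw.rgw service spec exported further down (rgw_frontend_port 8082 on the 192.168.122.0/24 network). Assuming cephadm wrote the frontend into this daemon's rgw_frontends option, as it normally does when deploying RGW, the binding could be inspected with:

    # show the frontend configuration generated for this daemon
    ceph config get client.rgw.rgw.compute-0.dwtrck rgw_frontends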
Nov 29 05:10:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 05:10:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:30 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 124d89dd-391b-4b34-9945-16d1dcae5fd1 (Updating rgw.rgw deployment (+1 -> 1))
Nov 29 05:10:30 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 124d89dd-391b-4b34-9945-16d1dcae5fd1 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 29 05:10:30 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 29 05:10:30 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 29 05:10:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 05:10:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 05:10:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:30 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev e2fe9c50-bb63-4196-ba25-35b29159b9ea (Updating mds.cephfs deployment (+1 -> 1))
Nov 29 05:10:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjtuko", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 05:10:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjtuko", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 05:10:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjtuko", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 05:10:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:30 compute-0 ceph-mgr[75473]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.mjtuko on compute-0
Nov 29 05:10:30 compute-0 ceph-mgr[75473]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.mjtuko on compute-0
Nov 29 05:10:30 compute-0 sudo[101212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:30 compute-0 sudo[101212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:30 compute-0 sudo[101212]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:30 compute-0 ceph-mon[75176]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjtuko", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 05:10:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.mjtuko", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 05:10:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:30 compute-0 sudo[101237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:31 compute-0 sudo[101237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:31 compute-0 sudo[101237]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:31 compute-0 sudo[101262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:31 compute-0 sudo[101262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:31 compute-0 sudo[101262]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:10:31 compute-0 determined_nobel[101058]: 
Nov 29 05:10:31 compute-0 determined_nobel[101058]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 05:10:31 compute-0 systemd[1]: libpod-97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929.scope: Deactivated successfully.
Nov 29 05:10:31 compute-0 podman[101041]: 2025-11-29 05:10:31.117786076 +0000 UTC m=+0.782303764 container died 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 05:10:31 compute-0 sudo[101287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 93f82912-647c-5e78-b081-707d0a2966d8
Nov 29 05:10:31 compute-0 sudo[101287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1908ffe131b7552f48f85ca272914ea130999d9c76e0f899bd4d00f080f8e1a2-merged.mount: Deactivated successfully.
Nov 29 05:10:31 compute-0 podman[101041]: 2025-11-29 05:10:31.171572969 +0000 UTC m=+0.836090637 container remove 97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929 (image=quay.io/ceph/ceph:v18, name=determined_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:31 compute-0 systemd[1]: libpod-conmon-97d0df3fd49b9c1bfbb0411376163e518e4c22bb52bccfb4aa12b96af3422929.scope: Deactivated successfully.
Nov 29 05:10:31 compute-0 sudo[100998]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:31 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 4 completed events
Nov 29 05:10:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 05:10:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:31 compute-0 podman[101368]: 2025-11-29 05:10:31.591764563 +0000 UTC m=+0.068880681 container create a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:10:31 compute-0 systemd[1]: Started libpod-conmon-a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc.scope.
Nov 29 05:10:31 compute-0 podman[101368]: 2025-11-29 05:10:31.566817543 +0000 UTC m=+0.043933661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:31 compute-0 podman[101368]: 2025-11-29 05:10:31.687893998 +0000 UTC m=+0.165010076 container init a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 05:10:31 compute-0 podman[101368]: 2025-11-29 05:10:31.695557929 +0000 UTC m=+0.172674007 container start a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 29 05:10:31 compute-0 podman[101368]: 2025-11-29 05:10:31.700184459 +0000 UTC m=+0.177300567 container attach a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:31 compute-0 laughing_ptolemy[101384]: 167 167
Nov 29 05:10:31 compute-0 systemd[1]: libpod-a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc.scope: Deactivated successfully.
Nov 29 05:10:31 compute-0 podman[101368]: 2025-11-29 05:10:31.70402784 +0000 UTC m=+0.181143958 container died a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6913ee9b0fbb5408668bce852dfb081e5e3292d8f5361e05f5330e63c04a37c3-merged.mount: Deactivated successfully.
Nov 29 05:10:31 compute-0 podman[101368]: 2025-11-29 05:10:31.757152607 +0000 UTC m=+0.234268725 container remove a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:31 compute-0 systemd[1]: libpod-conmon-a7167ec3101ea3c13fa4cd3181c650753b168d09bcc6268319f46c0bcfcc4cfc.scope: Deactivated successfully.
Nov 29 05:10:31 compute-0 systemd[1]: Reloading.
Nov 29 05:10:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 29 05:10:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 29 05:10:31 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 29 05:10:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 29 05:10:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 05:10:31 compute-0 systemd-sysv-generator[101430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:10:31 compute-0 systemd-rc-local-generator[101426]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:10:31 compute-0 ceph-mon[75176]: Saving service rgw.rgw spec with placement compute-0
Nov 29 05:10:31 compute-0 ceph-mon[75176]: Deploying daemon mds.cephfs.compute-0.mjtuko on compute-0
Nov 29 05:10:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:31 compute-0 ceph-mon[75176]: osdmap e30: 3 total, 3 up, 3 in
Nov 29 05:10:31 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 05:10:32 compute-0 sudo[101461]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uipjzzufnouxchcvnsrhxvsbtteffszt ; /usr/bin/python3'
Nov 29 05:10:32 compute-0 sudo[101461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:32 compute-0 systemd[1]: Reloading.
Nov 29 05:10:32 compute-0 systemd-rc-local-generator[101493]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:10:32 compute-0 systemd-sysv-generator[101496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:10:32 compute-0 python3[101465]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:32 compute-0 podman[101504]: 2025-11-29 05:10:32.394373047 +0000 UTC m=+0.063499864 container create 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:10:32 compute-0 podman[101504]: 2025-11-29 05:10:32.364403908 +0000 UTC m=+0.033530785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:32 compute-0 systemd[1]: Started libpod-conmon-6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909.scope.
Nov 29 05:10:32 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.mjtuko for 93f82912-647c-5e78-b081-707d0a2966d8...
Nov 29 05:10:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838d98921549cf1341840f9c81150c12264c84cd026ba9271e96f71858be2a6f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/838d98921549cf1341840f9c81150c12264c84cd026ba9271e96f71858be2a6f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:32 compute-0 podman[101504]: 2025-11-29 05:10:32.51702915 +0000 UTC m=+0.186155947 container init 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:32 compute-0 podman[101504]: 2025-11-29 05:10:32.529381472 +0000 UTC m=+0.198508269 container start 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 05:10:32 compute-0 podman[101504]: 2025-11-29 05:10:32.533459518 +0000 UTC m=+0.202586375 container attach 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:32 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 30 pg[8.0( empty local-lis/les=0/0 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:32 compute-0 podman[101573]: 2025-11-29 05:10:32.699200881 +0000 UTC m=+0.037963519 container create cd3cd449d854be23414a2f004c36f29760a53224357c3f4a29c773076c036416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mds-cephfs-compute-0-mjtuko, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3331cf059792b3f5b0647d44cc632c9c6aff73afb064754bc26086b86adb2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3331cf059792b3f5b0647d44cc632c9c6aff73afb064754bc26086b86adb2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3331cf059792b3f5b0647d44cc632c9c6aff73afb064754bc26086b86adb2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3331cf059792b3f5b0647d44cc632c9c6aff73afb064754bc26086b86adb2d/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.mjtuko supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:32 compute-0 podman[101573]: 2025-11-29 05:10:32.757369747 +0000 UTC m=+0.096132405 container init cd3cd449d854be23414a2f004c36f29760a53224357c3f4a29c773076c036416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mds-cephfs-compute-0-mjtuko, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:10:32 compute-0 podman[101573]: 2025-11-29 05:10:32.763288488 +0000 UTC m=+0.102051126 container start cd3cd449d854be23414a2f004c36f29760a53224357c3f4a29c773076c036416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mds-cephfs-compute-0-mjtuko, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:32 compute-0 bash[101573]: cd3cd449d854be23414a2f004c36f29760a53224357c3f4a29c773076c036416
Nov 29 05:10:32 compute-0 podman[101573]: 2025-11-29 05:10:32.681654675 +0000 UTC m=+0.020417333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:32 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.mjtuko for 93f82912-647c-5e78-b081-707d0a2966d8.
Nov 29 05:10:32 compute-0 ceph-mds[101593]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 05:10:32 compute-0 ceph-mds[101593]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 05:10:32 compute-0 ceph-mds[101593]: main not setting numa affinity
Nov 29 05:10:32 compute-0 ceph-mds[101593]: pidfile_write: ignore empty --pid-file
Nov 29 05:10:32 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mds-cephfs-compute-0-mjtuko[101589]: starting mds.cephfs.compute-0.mjtuko at 
Nov 29 05:10:32 compute-0 sudo[101287]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:32 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko Updating MDS map to version 2 from mon.0
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:32 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev e2fe9c50-bb63-4196-ba25-35b29159b9ea (Updating mds.cephfs deployment (+1 -> 1))
Nov 29 05:10:32 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event e2fe9c50-bb63-4196-ba25-35b29159b9ea (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 29 05:10:32 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=30/31 n=0 ec=30/30 lis/c=0/0 les/c/f=0/0/0 sis=30) [1] r=0 lpr=30 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:32 compute-0 ansible-async_wrapper.py[100571]: Done in kid B.
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:32 compute-0 sudo[101632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:32 compute-0 sudo[101632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:32 compute-0 sudo[101632]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e3 new map
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T05:10:17.381210+0000
                                           modified        2025-11-29T05:10:17.381255+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.mjtuko{-1:14265} state up:standby seq 1 addr [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 05:10:32 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko Updating MDS map to version 3 from mon.0
Nov 29 05:10:32 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko Monitors have assigned me to become a standby.
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] up:boot
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] as mds.0
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.mjtuko assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.mjtuko"} v 0) v1
Nov 29 05:10:32 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.mjtuko"}]: dispatch
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e3 all = 0
Nov 29 05:10:32 compute-0 ceph-mon[75176]: from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e4 new map
Nov 29 05:10:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T05:10:17.381210+0000
                                           modified        2025-11-29T05:10:32.991647+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14265}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.mjtuko{0:14265} state up:creating seq 1 addr [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 29 05:10:32 compute-0 ceph-mon[75176]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:32 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 05:10:32 compute-0 ceph-mon[75176]: osdmap e31: 3 total, 3 up, 3 in
Nov 29 05:10:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:32 compute-0 sudo[101658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko Updating MDS map to version 4 from mon.0
Nov 29 05:10:33 compute-0 sudo[101658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:33 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.mjtuko=up:creating}
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x1
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x100
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x600
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x601
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x602
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x603
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x604
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x605
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x606
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x607
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x608
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.cache creating system inode with ino:0x609
Nov 29 05:10:33 compute-0 sudo[101658]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:33 compute-0 ceph-mds[101593]: mds.0.4 creating_done
Nov 29 05:10:33 compute-0 ceph-mon[75176]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.mjtuko is now active in filesystem cephfs as rank 0
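With rank 0 now active, the filesystem is fully up; the MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health checks were cleared a moment earlier. The state can be confirmed from any node with admin credentials:

    # per-rank and standby summary for the filesystem
    ceph fs status cephfs
    # one-line fsmap view, matching the 'fsmap cephfs:...' entries the mon logs here
    ceph mds stat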
Nov 29 05:10:33 compute-0 sudo[101692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:10:33 compute-0 sudo[101692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:33 compute-0 sudo[101692]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:33 compute-0 agitated_nobel[101521]: 
Nov 29 05:10:33 compute-0 agitated_nobel[101521]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
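The agitated_nobel output above is the JSON form of 'ceph orch ls --export': the full set of service specs cephadm is converging on. The rgw entry, rendered as the YAML that 'ceph orch apply -i' accepts (a direct re-encoding of the JSON above, nothing added), is:

    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
    networks:
      - 192.168.122.0/24
    spec:
      rgw_frontend_port: 8082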
Nov 29 05:10:33 compute-0 systemd[1]: libpod-6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909.scope: Deactivated successfully.
Nov 29 05:10:33 compute-0 podman[101504]: 2025-11-29 05:10:33.116552238 +0000 UTC m=+0.785679055 container died 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-838d98921549cf1341840f9c81150c12264c84cd026ba9271e96f71858be2a6f-merged.mount: Deactivated successfully.
Nov 29 05:10:33 compute-0 sudo[101719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:33 compute-0 sudo[101719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:33 compute-0 sudo[101719]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:33 compute-0 podman[101504]: 2025-11-29 05:10:33.167020952 +0000 UTC m=+0.836147739 container remove 6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909 (image=quay.io/ceph/ceph:v18, name=agitated_nobel, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:33 compute-0 systemd[1]: libpod-conmon-6363833f41d487a26c6a9b26a81fd501a1e8a7c8013ddf01d6bd3a9efe7e1909.scope: Deactivated successfully.
Nov 29 05:10:33 compute-0 sudo[101461]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:33 compute-0 sudo[101755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:33 compute-0 sudo[101755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:33 compute-0 sudo[101755]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:33 compute-0 sudo[101780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:10:33 compute-0 sudo[101780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v78: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 29 05:10:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 29 05:10:33 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 29 05:10:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 05:10:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 05:10:33 compute-0 podman[101877]: 2025-11-29 05:10:33.915664189 +0000 UTC m=+0.085349241 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mds.? [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] up:boot
Nov 29 05:10:34 compute-0 ceph-mon[75176]: daemon mds.cephfs.compute-0.mjtuko assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 05:10:34 compute-0 ceph-mon[75176]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 05:10:34 compute-0 ceph-mon[75176]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 05:10:34 compute-0 ceph-mon[75176]: Cluster is now healthy
Nov 29 05:10:34 compute-0 ceph-mon[75176]: fsmap cephfs:0 1 up:standby
Nov 29 05:10:34 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.mjtuko"}]: dispatch
Nov 29 05:10:34 compute-0 ceph-mon[75176]: fsmap cephfs:1 {0=cephfs.compute-0.mjtuko=up:creating}
Nov 29 05:10:34 compute-0 ceph-mon[75176]: daemon mds.cephfs.compute-0.mjtuko is now active in filesystem cephfs as rank 0
Nov 29 05:10:34 compute-0 ceph-mon[75176]: from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:10:34 compute-0 ceph-mon[75176]: osdmap e32: 3 total, 3 up, 3 in
Nov 29 05:10:34 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e5 new map
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-29T05:10:17.381210+0000
                                           modified        2025-11-29T05:10:34.000457+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14265}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.mjtuko{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 29 05:10:34 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko Updating MDS map to version 5 from mon.0
Nov 29 05:10:34 compute-0 ceph-mds[101593]: mds.0.4 handle_mds_map i am now mds.0.4
Nov 29 05:10:34 compute-0 ceph-mds[101593]: mds.0.4 handle_mds_map state change up:creating --> up:active
Nov 29 05:10:34 compute-0 ceph-mds[101593]: mds.0.4 recovery_done -- successful recovery!
Nov 29 05:10:34 compute-0 ceph-mds[101593]: mds.0.4 active_start
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] up:active
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.mjtuko=up:active}
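With print_map epoch 5 above and the up:creating --> up:active transition, the filesystem is fully up: max_mds 1, rank 0 held by gid 14265, data pool 7, metadata pool 6, max_file_size 1099511627776 bytes (2^40, i.e. the 1 TiB default) and session_timeout 60 s. The same map can be read back as JSON; a sketch assuming "ceph fs get" returns it under an "mdsmap" key, as recent releases do:

    import json, subprocess

    # Fetch the filesystem map the mon printed above (epoch, max_mds, pools, ...).
    out = subprocess.run(
        ["ceph", "fs", "get", "cephfs", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    m = json.loads(out)["mdsmap"]
    # 1099511627776 == 2**40 bytes == 1 TiB, matching the dump above.
    print(m["epoch"], m["max_mds"], m["max_file_size"], m["session_timeout"])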
Nov 29 05:10:34 compute-0 podman[101877]: 2025-11-29 05:10:34.057856334 +0000 UTC m=+0.227541416 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:34 compute-0 sudo[101954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qymorlykgzkjepwzsbzdbpdycwcigcye ; /usr/bin/python3'
Nov 29 05:10:34 compute-0 sudo[101954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:34 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 32 pg[9.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:34 compute-0 python3[101958]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
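The ansible task above shows the standard client pattern for a cephadm cluster: a throwaway "podman run --rm" of the ceph image with /etc/ceph bind-mounted, ceph as the entrypoint, and the real subcommand (orch ps -f json) appended after the connection flags. The same invocation from Python, with image, fsid and paths taken from the log line (the wrapper function is ours, and the assimilate_ceph.conf mount is omitted):

    import subprocess

    def ceph_orch_ps(fsid, image="quay.io/ceph/ceph:v18"):
        # One-shot "ceph orch ps -f json" inside the ceph container image.
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", image,
            "--fsid", fsid,
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "orch", "ps", "-f", "json",
        ]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # print(ceph_orch_ps("93f82912-647c-5e78-b081-707d0a2966d8"))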
Nov 29 05:10:34 compute-0 podman[101984]: 2025-11-29 05:10:34.475029306 +0000 UTC m=+0.060358739 container create bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 05:10:34 compute-0 systemd[1]: Started libpod-conmon-bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263.scope.
Nov 29 05:10:34 compute-0 podman[101984]: 2025-11-29 05:10:34.448462328 +0000 UTC m=+0.033791751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97a7c6f5a5beea04715e1dd0a774792754e8aec34440a9c1680d4b7738806da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97a7c6f5a5beea04715e1dd0a774792754e8aec34440a9c1680d4b7738806da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:34 compute-0 podman[101984]: 2025-11-29 05:10:34.575067634 +0000 UTC m=+0.160397107 container init bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:34 compute-0 podman[101984]: 2025-11-29 05:10:34.584067597 +0000 UTC m=+0.169397000 container start bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:10:34 compute-0 podman[101984]: 2025-11-29 05:10:34.58756659 +0000 UTC m=+0.172896013 container attach bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 29 05:10:34 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [1] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:34 compute-0 sudo[101780]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:34 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev b70d627d-eff3-440d-826e-45927c56fcd3 does not exist
Nov 29 05:10:34 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a89dac84-5858-4e7b-b8f1-df5ab24619a4 does not exist
Nov 29 05:10:34 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 07731cc7-e030-48e0-96ed-d7fa05d11779 does not exist
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:10:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:34 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:35 compute-0 ceph-mon[75176]: pgmap v78: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:35 compute-0 ceph-mon[75176]: mds.? [v2:192.168.122.100:6814/189089471,v1:192.168.122.100:6815/189089471] up:active
Nov 29 05:10:35 compute-0 ceph-mon[75176]: fsmap cephfs:1 {0=cephfs.compute-0.mjtuko=up:active}
Nov 29 05:10:35 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 05:10:35 compute-0 ceph-mon[75176]: osdmap e33: 3 total, 3 up, 3 in
Nov 29 05:10:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:10:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:10:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:10:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:35 compute-0 sudo[102101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:35 compute-0 sudo[102101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:35 compute-0 sudo[102101]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:10:35 compute-0 hopeful_hofstadter[102023]: 
Nov 29 05:10:35 compute-0 hopeful_hofstadter[102023]: [{"container_id": "8c3d78b49174", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.43%", "created": "2025-11-29T05:09:06.936213Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-29T05:09:06.990283Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.913632Z", "memory_usage": 11597250, "ports": [], "service_name": "crash", "started": "2025-11-29T05:09:06.842550Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@crash.compute-0", "version": "18.2.7"}, {"container_id": "cd3cd449d854", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "9.35%", "created": "2025-11-29T05:10:32.781198Z", "daemon_id": "cephfs.compute-0.mjtuko", "daemon_name": "mds.cephfs.compute-0.mjtuko", "daemon_type": "mds", "events": ["2025-11-29T05:10:32.834335Z daemon:mds.cephfs.compute-0.mjtuko [INFO] \"Deployed mds.cephfs.compute-0.mjtuko on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.914379Z", "memory_usage": 18360565, "ports": [], "service_name": "mds.cephfs", "started": "2025-11-29T05:10:32.685552Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@mds.cephfs.compute-0.mjtuko", "version": "18.2.7"}, {"container_id": "342af346b419", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "29.17%", "created": "2025-11-29T05:07:56.139663Z", "daemon_id": "compute-0.csskcz", "daemon_name": "mgr.compute-0.csskcz", "daemon_type": "mgr", "events": ["2025-11-29T05:09:11.542415Z daemon:mgr.compute-0.csskcz [INFO] \"Reconfigured mgr.compute-0.csskcz on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.913497Z", "memory_usage": 548090675, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-29T05:07:56.051515Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@mgr.compute-0.csskcz", "version": "18.2.7"}, {"container_id": "8221d7b65f9d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], 
"container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.26%", "created": "2025-11-29T05:07:51.261549Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-29T05:09:10.799324Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.913216Z", "memory_request": 2147483648, "memory_usage": 42446356, "ports": [], "service_name": "mon", "started": "2025-11-29T05:07:53.908548Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@mon.compute-0", "version": "18.2.7"}, {"container_id": "a8f7d50ad538", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.73%", "created": "2025-11-29T05:09:36.470871Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-29T05:09:36.520527Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.913763Z", "memory_request": 4294967296, "memory_usage": 60146319, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T05:09:36.322007Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@osd.0", "version": "18.2.7"}, {"container_id": "82f057625789", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.81%", "created": "2025-11-29T05:09:41.082980Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-29T05:09:41.168704Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.913953Z", "memory_request": 4294967296, "memory_usage": 57378078, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T05:09:40.928970Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@osd.1", "version": "18.2.7"}, {"container_id": "5bc94574df1b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.01%", "created": "2025-11-29T05:09:46.298104Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-29T05:09:46.430954Z daemon:osd.2 [INFO] 
\"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T05:10:34.914088Z", "memory_request": 4294967296, "memory_usage": 55993958, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-29T05:09:46.102905Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@osd.2", "version": "18.2.7"}, {"container_id": "bb930ede36ba", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.77%", "created": "2025-11-29T05:10:30.786152Z", "daemon_id": "rgw.compute-0.dwtrck", "daemon_name": "rgw.rgw.compute-0.dwtrck", "daemon_type": "rgw", "events": ["2025-11-29T05:10:30.842523Z daemon:rgw.rgw.compute-0.dwtrck [INFO] \"Deployed rgw.rgw.compute-0.dwtrck on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-11-29T05:10:34.914213Z", "memory_usage": 18685624, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-11-29T05:10:30.684104Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-93f82912-647c-5e78-b081-707d0a2966d8@rgw.rgw.compute-0.dwtrck", "version": "18.2.7"}]
Nov 29 05:10:35 compute-0 systemd[1]: libpod-bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263.scope: Deactivated successfully.
Nov 29 05:10:35 compute-0 podman[101984]: 2025-11-29 05:10:35.123944324 +0000 UTC m=+0.709273717 container died bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:10:35 compute-0 sudo[102126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:35 compute-0 sudo[102126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c97a7c6f5a5beea04715e1dd0a774792754e8aec34440a9c1680d4b7738806da-merged.mount: Deactivated successfully.
Nov 29 05:10:35 compute-0 sudo[102126]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:35 compute-0 podman[101984]: 2025-11-29 05:10:35.188501561 +0000 UTC m=+0.773830984 container remove bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263 (image=quay.io/ceph/ceph:v18, name=hopeful_hofstadter, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:10:35 compute-0 systemd[1]: libpod-conmon-bf9a2b56c5fdc46f94b034f3123c806b2f630d7b84b4baf898fc81dda0c49263.scope: Deactivated successfully.
Nov 29 05:10:35 compute-0 sudo[101954]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:35 compute-0 sudo[102162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:35 compute-0 sudo[102162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:35 compute-0 sudo[102162]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v81: 9 pgs: 1 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 29 05:10:35 compute-0 sudo[102190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:10:35 compute-0 sudo[102190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:35 compute-0 rsyslogd[1003]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "8c3d78b49174", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
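The rsyslogd complaint here is the flip side of that dump: the orch ps JSON ran to 8588 bytes against the configured 8096-byte limit, so the syslog file copy of the message is truncated while the journal keeps it whole. If complete file copies matter, the limit can be raised in /etc/rsyslog.conf, e.g. global(maxMessageSize="64k"), and rsyslog restarted.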
Nov 29 05:10:35 compute-0 sshd-session[101838]: Connection closed by 101.47.141.125 port 42042 [preauth]
Nov 29 05:10:35 compute-0 podman[102255]: 2025-11-29 05:10:35.766427978 +0000 UTC m=+0.074175376 container create b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:10:35 compute-0 systemd[1]: Started libpod-conmon-b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741.scope.
Nov 29 05:10:35 compute-0 podman[102255]: 2025-11-29 05:10:35.734914272 +0000 UTC m=+0.042661710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 29 05:10:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 29 05:10:35 compute-0 podman[102255]: 2025-11-29 05:10:35.868210236 +0000 UTC m=+0.175957674 container init b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:35 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 29 05:10:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 05:10:35 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 05:10:35 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 34 pg[10.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:35 compute-0 podman[102255]: 2025-11-29 05:10:35.877985148 +0000 UTC m=+0.185732506 container start b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:10:35 compute-0 podman[102255]: 2025-11-29 05:10:35.881355218 +0000 UTC m=+0.189102676 container attach b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:10:35 compute-0 distracted_moore[102271]: 167 167
Nov 29 05:10:35 compute-0 systemd[1]: libpod-b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741.scope: Deactivated successfully.
Nov 29 05:10:35 compute-0 podman[102255]: 2025-11-29 05:10:35.88440846 +0000 UTC m=+0.192155828 container died b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 29 05:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-82edb18f9b8fb982637aaa6322b97b44b0f477cc74fee865e087390c794c7fd7-merged.mount: Deactivated successfully.
Nov 29 05:10:35 compute-0 podman[102255]: 2025-11-29 05:10:35.924756095 +0000 UTC m=+0.232503483 container remove b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_moore, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:10:35 compute-0 systemd[1]: libpod-conmon-b332a20270e4040c24edb8cc5f4267a6d2737972354868da6b70b9f9bd3f5741.scope: Deactivated successfully.
Nov 29 05:10:36 compute-0 sudo[102327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrpswepnmhlhjuejeldvokbytmapqxlc ; /usr/bin/python3'
Nov 29 05:10:36 compute-0 sudo[102327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:36 compute-0 podman[102305]: 2025-11-29 05:10:36.106572757 +0000 UTC m=+0.051733674 container create 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:10:36 compute-0 systemd[1]: Started libpod-conmon-79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93.scope.
Nov 29 05:10:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:36 compute-0 podman[102305]: 2025-11-29 05:10:36.084909065 +0000 UTC m=+0.030069992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:36 compute-0 podman[102305]: 2025-11-29 05:10:36.184742927 +0000 UTC m=+0.129903874 container init 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:36 compute-0 podman[102305]: 2025-11-29 05:10:36.192678555 +0000 UTC m=+0.137839482 container start 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:10:36 compute-0 podman[102305]: 2025-11-29 05:10:36.196423174 +0000 UTC m=+0.141584131 container attach 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:10:36 compute-0 python3[102333]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:36 compute-0 podman[102345]: 2025-11-29 05:10:36.29172408 +0000 UTC m=+0.048133721 container create 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 05:10:36 compute-0 systemd[1]: Started libpod-conmon-6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db.scope.
Nov 29 05:10:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b62a457bf4b5f8856b2937906175881f667d785a7ba325f942c29171444720/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b62a457bf4b5f8856b2937906175881f667d785a7ba325f942c29171444720/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:36 compute-0 podman[102345]: 2025-11-29 05:10:36.272048584 +0000 UTC m=+0.028458255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:36 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 5 completed events
Nov 29 05:10:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 05:10:36 compute-0 podman[102345]: 2025-11-29 05:10:36.371726613 +0000 UTC m=+0.128136274 container init 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:36 compute-0 podman[102345]: 2025-11-29 05:10:36.380707945 +0000 UTC m=+0.137117586 container start 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:36 compute-0 podman[102345]: 2025-11-29 05:10:36.384352812 +0000 UTC m=+0.140762443 container attach 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 29 05:10:36 compute-0 ceph-mon[75176]: from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 05:10:36 compute-0 ceph-mon[75176]: pgmap v81: 9 pgs: 1 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 29 05:10:36 compute-0 ceph-mon[75176]: osdmap e34: 3 total, 3 up, 3 in
Nov 29 05:10:36 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 05:10:36 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 05:10:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 29 05:10:36 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 29 05:10:36 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [2] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 05:10:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807082650' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 05:10:37 compute-0 awesome_jones[102361]: 
Nov 29 05:10:37 compute-0 awesome_jones[102361]: {"fsid":"93f82912-647c-5e78-b081-707d0a2966d8","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":162,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1764392994,"num_in_osds":3,"osd_in_since":1764392965,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":8},{"state_name":"unknown","count":1}],"num_pgs":9,"num_pools":9,"num_objects":27,"data_bytes":463028,"bytes_used":83898368,"bytes_avail":64328028160,"bytes_total":64411926528,"unknown_pgs_ratio":0.1111111119389534,"read_bytes_sec":1023,"write_bytes_sec":4606,"read_op_per_sec":0,"write_op_per_sec":11},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.mjtuko","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T05:09:43.260960+0000","services":{}},"progress_events":{}}
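The payload above is the JSON form of `ceph status` as dispatched a few lines earlier. A minimal sketch of pulling the health and PG summary out of it — assuming the blob has been saved to a file named status.json (the filename is illustrative, not from the log):

    import json

    # Load the `ceph status --format json` output captured in the log above.
    with open("status.json") as f:
        status = json.load(f)

    print(status["health"]["status"])                 # e.g. HEALTH_OK
    print(status["osdmap"]["num_up_osds"], "up /",
          status["osdmap"]["num_osds"], "total OSDs")  # e.g. 3 up / 3 total OSDs
    for pgs in status["pgmap"]["pgs_by_state"]:
        print(pgs["count"], pgs["state_name"])         # e.g. 8 active+clean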
Nov 29 05:10:37 compute-0 systemd[1]: libpod-6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db.scope: Deactivated successfully.
Nov 29 05:10:37 compute-0 podman[102345]: 2025-11-29 05:10:37.039767413 +0000 UTC m=+0.796177044 container died 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5b62a457bf4b5f8856b2937906175881f667d785a7ba325f942c29171444720-merged.mount: Deactivated successfully.
Nov 29 05:10:37 compute-0 podman[102345]: 2025-11-29 05:10:37.07770493 +0000 UTC m=+0.834114571 container remove 6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db (image=quay.io/ceph/ceph:v18, name=awesome_jones, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:37 compute-0 systemd[1]: libpod-conmon-6e7fbb26733a30c84e75404c75912a39cbe96df223ee81b813afd8bdf1fb26db.scope: Deactivated successfully.
Nov 29 05:10:37 compute-0 sudo[102327]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:37 compute-0 thirsty_almeida[102340]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:10:37 compute-0 thirsty_almeida[102340]: --> relative data size: 1.0
Nov 29 05:10:37 compute-0 thirsty_almeida[102340]: --> All data devices are unavailable
Nov 29 05:10:37 compute-0 systemd[1]: libpod-79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93.scope: Deactivated successfully.
Nov 29 05:10:37 compute-0 podman[102305]: 2025-11-29 05:10:37.228886728 +0000 UTC m=+1.174047635 container died 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 05:10:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f83e104edf85e56aa03bfa9a8947607bcb114e566583ae9e65e7854f62136b-merged.mount: Deactivated successfully.
Nov 29 05:10:37 compute-0 podman[102305]: 2025-11-29 05:10:37.275919951 +0000 UTC m=+1.221080858 container remove 79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_almeida, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v84: 10 pgs: 2 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 29 05:10:37 compute-0 systemd[1]: libpod-conmon-79fda564a50352a4658c739a10063841caaca06274e08939cc3900de31e4cf93.scope: Deactivated successfully.
Nov 29 05:10:37 compute-0 sudo[102190]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:37 compute-0 sudo[102449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:37 compute-0 sudo[102449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:37 compute-0 sudo[102449]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:37 compute-0 sudo[102474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:37 compute-0 sudo[102474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:37 compute-0 sudo[102474]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:37 compute-0 sudo[102499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:37 compute-0 sudo[102499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:37 compute-0 sudo[102499]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:37 compute-0 sudo[102524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:10:37 compute-0 sudo[102524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:37 compute-0 podman[102589]: 2025-11-29 05:10:37.811022445 +0000 UTC m=+0.039235400 container create 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:37 compute-0 systemd[1]: Started libpod-conmon-51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19.scope.
Nov 29 05:10:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:37 compute-0 podman[102589]: 2025-11-29 05:10:37.88096313 +0000 UTC m=+0.109176075 container init 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 29 05:10:37 compute-0 podman[102589]: 2025-11-29 05:10:37.886945321 +0000 UTC m=+0.115158236 container start 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:37 compute-0 podman[102589]: 2025-11-29 05:10:37.793126521 +0000 UTC m=+0.021339466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:37 compute-0 podman[102589]: 2025-11-29 05:10:37.890849764 +0000 UTC m=+0.119062689 container attach 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 05:10:37 compute-0 systemd[1]: libpod-51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19.scope: Deactivated successfully.
Nov 29 05:10:37 compute-0 festive_jemison[102604]: 167 167
Nov 29 05:10:37 compute-0 podman[102589]: 2025-11-29 05:10:37.89239425 +0000 UTC m=+0.120607255 container died 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 05:10:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 29 05:10:37 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1901650890' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 05:10:37 compute-0 ceph-mon[75176]: osdmap e35: 3 total, 3 up, 3 in
Nov 29 05:10:37 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1807082650' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 05:10:37 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:10:37 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 29 05:10:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 05:10:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 05:10:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e89c830c1aba67b304c22968853a9bc41c8e8b33ff2bc6113b4194b9bcbd441a-merged.mount: Deactivated successfully.
Nov 29 05:10:37 compute-0 podman[102589]: 2025-11-29 05:10:37.937579189 +0000 UTC m=+0.165792134 container remove 51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_jemison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:10:37 compute-0 systemd[1]: libpod-conmon-51b2e698a80b69418209a1f80a21c1dba166f0bc6ead35ad5104b80d8d2dba19.scope: Deactivated successfully.
Nov 29 05:10:38 compute-0 sudo[102647]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdnzqrxnznszmeoqrqdocfukzhtouesg ; /usr/bin/python3'
Nov 29 05:10:38 compute-0 sudo[102647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:38 compute-0 podman[102655]: 2025-11-29 05:10:38.096685515 +0000 UTC m=+0.043107521 container create 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:38 compute-0 systemd[1]: Started libpod-conmon-75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2.scope.
Nov 29 05:10:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5b7fdc8066dc2c43510c98277aa9a497e8ece4c100d53dee82b421e5fb13c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5b7fdc8066dc2c43510c98277aa9a497e8ece4c100d53dee82b421e5fb13c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5b7fdc8066dc2c43510c98277aa9a497e8ece4c100d53dee82b421e5fb13c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e5b7fdc8066dc2c43510c98277aa9a497e8ece4c100d53dee82b421e5fb13c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:38 compute-0 podman[102655]: 2025-11-29 05:10:38.07662326 +0000 UTC m=+0.023045326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:38 compute-0 podman[102655]: 2025-11-29 05:10:38.186114471 +0000 UTC m=+0.132536587 container init 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 05:10:38 compute-0 python3[102650]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:38 compute-0 podman[102655]: 2025-11-29 05:10:38.198247518 +0000 UTC m=+0.144669574 container start 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:38 compute-0 podman[102655]: 2025-11-29 05:10:38.202677294 +0000 UTC m=+0.149099300 container attach 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:38 compute-0 podman[102677]: 2025-11-29 05:10:38.272521916 +0000 UTC m=+0.052168956 container create 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:38 compute-0 systemd[1]: Started libpod-conmon-5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf.scope.
Nov 29 05:10:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa094add065a17083e90a81c0edc43e2ae53e1c90c9f41f6e9f2d6c273e9f00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1aa094add065a17083e90a81c0edc43e2ae53e1c90c9f41f6e9f2d6c273e9f00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:38 compute-0 podman[102677]: 2025-11-29 05:10:38.252656406 +0000 UTC m=+0.032303436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:38 compute-0 podman[102677]: 2025-11-29 05:10:38.356284548 +0000 UTC m=+0.135931568 container init 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 05:10:38 compute-0 podman[102677]: 2025-11-29 05:10:38.361817059 +0000 UTC m=+0.141464109 container start 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:38 compute-0 podman[102677]: 2025-11-29 05:10:38.36567158 +0000 UTC m=+0.145318610 container attach 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 05:10:38 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 36 pg[11.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 05:10:38 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/778170416' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:10:38 compute-0 crazy_kalam[102692]: 
Nov 29 05:10:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 29 05:10:38 compute-0 systemd[1]: libpod-5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf.scope: Deactivated successfully.
Nov 29 05:10:38 compute-0 crazy_kalam[102692]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.dwtrck","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
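The `config dump -f json` output above is a flat JSON array of {section, name, value, level, can_update_at_runtime, mask} records. A minimal sketch for filtering it — assuming the array is saved to config_dump.json (illustrative filename):

    import json

    # Load the `ceph config dump -f json` output captured in the log above.
    with open("config_dump.json") as f:
        options = json.load(f)

    # Print every RGW/Keystone option set at the global level.
    for opt in options:
        if opt["section"] == "global" and opt["name"].startswith("rgw_keystone"):
            print(f'{opt["name"]} = {opt["value"]}')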
Nov 29 05:10:38 compute-0 ceph-mon[75176]: pgmap v84: 10 pgs: 2 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 29 05:10:38 compute-0 podman[102677]: 2025-11-29 05:10:38.897339773 +0000 UTC m=+0.676986833 container died 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:38 compute-0 ceph-mon[75176]: osdmap e36: 3 total, 3 up, 3 in
Nov 29 05:10:38 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 05:10:38 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/778170416' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 05:10:38 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 05:10:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 29 05:10:38 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 29 05:10:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 05:10:38 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 05:10:38 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1aa094add065a17083e90a81c0edc43e2ae53e1c90c9f41f6e9f2d6c273e9f00-merged.mount: Deactivated successfully.
Nov 29 05:10:38 compute-0 podman[102677]: 2025-11-29 05:10:38.95216108 +0000 UTC m=+0.731808080 container remove 5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf (image=quay.io/ceph/ceph:v18, name=crazy_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:10:38 compute-0 zen_poincare[102672]: {
Nov 29 05:10:38 compute-0 zen_poincare[102672]:     "0": [
Nov 29 05:10:38 compute-0 zen_poincare[102672]:         {
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "devices": [
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "/dev/loop3"
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             ],
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_name": "ceph_lv0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_size": "21470642176",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "name": "ceph_lv0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "tags": {
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.crush_device_class": "",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.encrypted": "0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.osd_id": "0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.type": "block",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.vdo": "0"
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             },
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "type": "block",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "vg_name": "ceph_vg0"
Nov 29 05:10:38 compute-0 zen_poincare[102672]:         }
Nov 29 05:10:38 compute-0 zen_poincare[102672]:     ],
Nov 29 05:10:38 compute-0 zen_poincare[102672]:     "1": [
Nov 29 05:10:38 compute-0 zen_poincare[102672]:         {
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "devices": [
Nov 29 05:10:38 compute-0 sudo[102647]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "/dev/loop4"
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             ],
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_name": "ceph_lv1",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_size": "21470642176",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "name": "ceph_lv1",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "tags": {
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.crush_device_class": "",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.encrypted": "0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.osd_id": "1",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.type": "block",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.vdo": "0"
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             },
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "type": "block",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "vg_name": "ceph_vg1"
Nov 29 05:10:38 compute-0 zen_poincare[102672]:         }
Nov 29 05:10:38 compute-0 zen_poincare[102672]:     ],
Nov 29 05:10:38 compute-0 zen_poincare[102672]:     "2": [
Nov 29 05:10:38 compute-0 zen_poincare[102672]:         {
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "devices": [
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "/dev/loop5"
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             ],
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_name": "ceph_lv2",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_size": "21470642176",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "name": "ceph_lv2",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "tags": {
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.crush_device_class": "",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.encrypted": "0",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.osd_id": "2",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.type": "block",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:                 "ceph.vdo": "0"
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             },
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "type": "block",
Nov 29 05:10:38 compute-0 zen_poincare[102672]:             "vg_name": "ceph_vg2"
Nov 29 05:10:38 compute-0 zen_poincare[102672]:         }
Nov 29 05:10:38 compute-0 zen_poincare[102672]:     ]
Nov 29 05:10:38 compute-0 zen_poincare[102672]: }
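The `ceph-volume ... lvm list --format json` payload above keys the output by OSD id, each mapping to a list of backing LVs. A minimal sketch of turning it into an osd-id → LV-path map — assuming the blob is saved to lvm_list.json (illustrative filename):

    import json

    # Load the `ceph-volume lvm list --format json` output captured above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # Keys are OSD ids ("0", "1", "2"); values are lists of LV records.
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f'osd.{osd_id}: {lv["lv_path"]} '
                  f'(osd_fsid {lv["tags"]["ceph.osd_fsid"]})')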
Nov 29 05:10:38 compute-0 systemd[1]: libpod-conmon-5fcb7bdd6348ff98a8b883f9d7dcd597d37289abb10a4252359fd73d917098bf.scope: Deactivated successfully.
Nov 29 05:10:39 compute-0 systemd[1]: libpod-75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2.scope: Deactivated successfully.
Nov 29 05:10:39 compute-0 podman[102655]: 2025-11-29 05:10:39.001550899 +0000 UTC m=+0.947972935 container died 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e5b7fdc8066dc2c43510c98277aa9a497e8ece4c100d53dee82b421e5fb13c2-merged.mount: Deactivated successfully.
Nov 29 05:10:39 compute-0 podman[102655]: 2025-11-29 05:10:39.052952695 +0000 UTC m=+0.999374701 container remove 75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_poincare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:39 compute-0 systemd[1]: libpod-conmon-75fc408e912cb37d2d36c2bab5028019d31bfce8adb82bb61de822bbc9b7bef2.scope: Deactivated successfully.
Nov 29 05:10:39 compute-0 sudo[102524]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:39 compute-0 sudo[102744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:39 compute-0 sudo[102744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:39 compute-0 sudo[102744]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:39 compute-0 sudo[102769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:39 compute-0 sudo[102769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:39 compute-0 sudo[102769]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 1 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Nov 29 05:10:39 compute-0 sudo[102794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:39 compute-0 sudo[102794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:39 compute-0 sudo[102794]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:39 compute-0 sudo[102819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:10:39 compute-0 sudo[102819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:39 compute-0 podman[102884]: 2025-11-29 05:10:39.787100629 +0000 UTC m=+0.065701386 container create 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:39 compute-0 systemd[1]: Started libpod-conmon-8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db.scope.
Nov 29 05:10:39 compute-0 podman[102884]: 2025-11-29 05:10:39.76095377 +0000 UTC m=+0.039554587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:39 compute-0 podman[102884]: 2025-11-29 05:10:39.885043367 +0000 UTC m=+0.163644184 container init 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:39 compute-0 podman[102884]: 2025-11-29 05:10:39.892447642 +0000 UTC m=+0.171048399 container start 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:39 compute-0 podman[102884]: 2025-11-29 05:10:39.896779454 +0000 UTC m=+0.175380211 container attach 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:10:39 compute-0 infallible_chatelet[102900]: 167 167
Nov 29 05:10:39 compute-0 systemd[1]: libpod-8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db.scope: Deactivated successfully.
Nov 29 05:10:39 compute-0 podman[102884]: 2025-11-29 05:10:39.898153977 +0000 UTC m=+0.176754744 container died 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:10:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 29 05:10:39 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 05:10:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 29 05:10:39 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 29 05:10:39 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 05:10:39 compute-0 ceph-mon[75176]: osdmap e37: 3 total, 3 up, 3 in
Nov 29 05:10:39 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 05:10:39 compute-0 sudo[102929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agkcgltngayyaxihwthmulxyobsaimut ; /usr/bin/python3'
Nov 29 05:10:39 compute-0 sudo[102929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7de168f7b92ba7dc707b051e8ab279e3ca2c3fd7cbe43018e920ccfcaa01744e-merged.mount: Deactivated successfully.
Nov 29 05:10:39 compute-0 podman[102884]: 2025-11-29 05:10:39.955509984 +0000 UTC m=+0.234110711 container remove 8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:10:39 compute-0 systemd[1]: libpod-conmon-8524faaa7dbd83dba2b69556e3413a0bd2537fc779993c9b67b4fcff898af1db.scope: Deactivated successfully.
Nov 29 05:10:40 compute-0 radosgw[101131]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 05:10:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-rgw-rgw-compute-0-dwtrck[101127]: 2025-11-29T05:10:40.057+0000 7f8fe2607940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 05:10:40 compute-0 radosgw[101131]: framework: beast
Nov 29 05:10:40 compute-0 radosgw[101131]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 05:10:40 compute-0 radosgw[101131]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 05:10:40 compute-0 python3[102941]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:40 compute-0 radosgw[101131]: starting handler: beast
Nov 29 05:10:40 compute-0 radosgw[101131]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 05:10:40 compute-0 podman[102976]: 2025-11-29 05:10:40.150334565 +0000 UTC m=+0.047894165 container create f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 05:10:40 compute-0 radosgw[101131]: mgrc service_daemon_register rgw.14273 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dwtrck,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=f35e7436-e8c2-46d1-be58-9961c1fdcc6c,zone_name=default,zonegroup_id=467ce4d9-6945-496b-b23e-b9cf98f6161a,zonegroup_name=default}
Nov 29 05:10:40 compute-0 podman[102982]: 2025-11-29 05:10:40.181081793 +0000 UTC m=+0.058890265 container create fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 05:10:40 compute-0 systemd[1]: Started libpod-conmon-f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281.scope.
Nov 29 05:10:40 compute-0 systemd[1]: Started libpod-conmon-fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292.scope.
Nov 29 05:10:40 compute-0 podman[102976]: 2025-11-29 05:10:40.133437396 +0000 UTC m=+0.030997026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d12f2de3f8ca41e0f06f0cbf9b7bd185fe267ad460bd3cb83e8d92b086206d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d12f2de3f8ca41e0f06f0cbf9b7bd185fe267ad460bd3cb83e8d92b086206d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d12f2de3f8ca41e0f06f0cbf9b7bd185fe267ad460bd3cb83e8d92b086206d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d12f2de3f8ca41e0f06f0cbf9b7bd185fe267ad460bd3cb83e8d92b086206d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a065102c01d931ae2635bb9c0583c313eba74241ca3e497025e7cd5dcf6b60/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a065102c01d931ae2635bb9c0583c313eba74241ca3e497025e7cd5dcf6b60/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:40 compute-0 podman[102982]: 2025-11-29 05:10:40.16024079 +0000 UTC m=+0.038049292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:40 compute-0 podman[102976]: 2025-11-29 05:10:40.263135364 +0000 UTC m=+0.160694984 container init f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 05:10:40 compute-0 podman[102976]: 2025-11-29 05:10:40.271732498 +0000 UTC m=+0.169292098 container start f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:40 compute-0 podman[102982]: 2025-11-29 05:10:40.275991899 +0000 UTC m=+0.153800441 container init fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:40 compute-0 podman[102982]: 2025-11-29 05:10:40.28406906 +0000 UTC m=+0.161877502 container start fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:10:40 compute-0 podman[102976]: 2025-11-29 05:10:40.285316399 +0000 UTC m=+0.182876079 container attach f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:40 compute-0 podman[102982]: 2025-11-29 05:10:40.290168224 +0000 UTC m=+0.167976676 container attach fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 05:10:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 29 05:10:40 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3788946129' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 05:10:40 compute-0 clever_black[103524]: mimic
Nov 29 05:10:40 compute-0 systemd[1]: libpod-fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292.scope: Deactivated successfully.
Nov 29 05:10:40 compute-0 podman[102982]: 2025-11-29 05:10:40.826312032 +0000 UTC m=+0.704120484 container died fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 05:10:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-37a065102c01d931ae2635bb9c0583c313eba74241ca3e497025e7cd5dcf6b60-merged.mount: Deactivated successfully.
Nov 29 05:10:40 compute-0 podman[102982]: 2025-11-29 05:10:40.871187434 +0000 UTC m=+0.748995886 container remove fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292 (image=quay.io/ceph/ceph:v18, name=clever_black, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 05:10:40 compute-0 systemd[1]: libpod-conmon-fdb7c23e3af23a188fb683702ccd755bb04a8a5c0a048ddf47ee73952ceb3292.scope: Deactivated successfully.
Nov 29 05:10:40 compute-0 sudo[102929]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:40 compute-0 ceph-mon[75176]: pgmap v87: 11 pgs: 1 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Nov 29 05:10:40 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/46760473' entity='client.rgw.rgw.compute-0.dwtrck' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 05:10:40 compute-0 ceph-mon[75176]: osdmap e38: 3 total, 3 up, 3 in
Nov 29 05:10:40 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3788946129' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 05:10:41 compute-0 funny_swanson[103522]: {
Nov 29 05:10:41 compute-0 funny_swanson[103522]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "osd_id": 0,
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "type": "bluestore"
Nov 29 05:10:41 compute-0 funny_swanson[103522]:     },
Nov 29 05:10:41 compute-0 funny_swanson[103522]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "osd_id": 1,
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "type": "bluestore"
Nov 29 05:10:41 compute-0 funny_swanson[103522]:     },
Nov 29 05:10:41 compute-0 funny_swanson[103522]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "osd_id": 2,
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:41 compute-0 funny_swanson[103522]:         "type": "bluestore"
Nov 29 05:10:41 compute-0 funny_swanson[103522]:     }
Nov 29 05:10:41 compute-0 funny_swanson[103522]: }
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:10:41
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'default.rgw.control', 'images', 'backups', 'cephfs.cephfs.meta', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 232 B/s rd, 465 B/s wr, 1 op/s
Nov 29 05:10:41 compute-0 systemd[1]: libpod-f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281.scope: Deactivated successfully.
Nov 29 05:10:41 compute-0 systemd[1]: libpod-f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281.scope: Consumed 1.004s CPU time.
Nov 29 05:10:41 compute-0 podman[103593]: 2025-11-29 05:10:41.332980622 +0000 UTC m=+0.027565722 container died f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 1)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 05:10:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 05:10:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:10:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-84d12f2de3f8ca41e0f06f0cbf9b7bd185fe267ad460bd3cb83e8d92b086206d-merged.mount: Deactivated successfully.
Nov 29 05:10:41 compute-0 podman[103593]: 2025-11-29 05:10:41.396113446 +0000 UTC m=+0.090698546 container remove f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:10:41 compute-0 systemd[1]: libpod-conmon-f4434ee52ff6b56db7a9c0582f38d81f2012a22d9d46a714d051df2ef67be281.scope: Deactivated successfully.
Nov 29 05:10:41 compute-0 sudo[102819]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 789eabe4-50d0-4c54-9022-1207bfba532e does not exist
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev e9aa2db9-645c-4bf9-9053-fd985b25d602 does not exist
Nov 29 05:10:41 compute-0 sudo[103608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:41 compute-0 sudo[103608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:41 compute-0 sudo[103608]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:41 compute-0 sudo[103633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:10:41 compute-0 sudo[103633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:41 compute-0 sudo[103633]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:41 compute-0 sudo[103658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:41 compute-0 sudo[103658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:41 compute-0 sudo[103658]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:41 compute-0 sudo[103727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edqdkvgbustwmrkjxgtlobmfumymalbp ; /usr/bin/python3'
Nov 29 05:10:41 compute-0 sudo[103727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:10:41 compute-0 sudo[103689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:41 compute-0 sudo[103689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:41 compute-0 sudo[103689]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:41 compute-0 sudo[103734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:41 compute-0 sudo[103734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:41 compute-0 sudo[103734]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 29 05:10:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 29 05:10:41 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 29 05:10:41 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev b54068fd-06f2-486d-9164-d647b988f2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 05:10:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 05:10:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:41 compute-0 python3[103731]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:10:41 compute-0 sudo[103759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:10:41 compute-0 sudo[103759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:42 compute-0 podman[103782]: 2025-11-29 05:10:42.033585533 +0000 UTC m=+0.059028009 container create 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Nov 29 05:10:42 compute-0 systemd[1]: Started libpod-conmon-59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88.scope.
Nov 29 05:10:42 compute-0 podman[103782]: 2025-11-29 05:10:42.009661687 +0000 UTC m=+0.035104163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:10:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4219e571bd90d3100e6cda2037394426800c8acaf03ea7bc5b7eb27c4af5bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b4219e571bd90d3100e6cda2037394426800c8acaf03ea7bc5b7eb27c4af5bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:42 compute-0 podman[103782]: 2025-11-29 05:10:42.138027614 +0000 UTC m=+0.163470100 container init 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:42 compute-0 podman[103782]: 2025-11-29 05:10:42.146135706 +0000 UTC m=+0.171578202 container start 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 05:10:42 compute-0 podman[103782]: 2025-11-29 05:10:42.150868148 +0000 UTC m=+0.176310614 container attach 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:10:42 compute-0 podman[103872]: 2025-11-29 05:10:42.506750341 +0000 UTC m=+0.093522055 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:10:42 compute-0 podman[103872]: 2025-11-29 05:10:42.634712709 +0000 UTC m=+0.221484413 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:10:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 29 05:10:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779896496' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 05:10:42 compute-0 interesting_antonelli[103801]: 
Nov 29 05:10:42 compute-0 interesting_antonelli[103801]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Nov 29 05:10:42 compute-0 systemd[1]: libpod-59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88.scope: Deactivated successfully.
Nov 29 05:10:42 compute-0 podman[103782]: 2025-11-29 05:10:42.77086595 +0000 UTC m=+0.796308416 container died 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:10:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b4219e571bd90d3100e6cda2037394426800c8acaf03ea7bc5b7eb27c4af5bd-merged.mount: Deactivated successfully.
Nov 29 05:10:42 compute-0 podman[103782]: 2025-11-29 05:10:42.813148671 +0000 UTC m=+0.838591137 container remove 59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88 (image=quay.io/ceph/ceph:v18, name=interesting_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 05:10:42 compute-0 systemd[1]: libpod-conmon-59c5b5526d00a942d67e1ef04b0c897bd4fb80e11b31bbec0efc328da5546c88.scope: Deactivated successfully.
Nov 29 05:10:42 compute-0 sudo[103727]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 29 05:10:42 compute-0 ceph-mon[75176]: pgmap v89: 11 pgs: 11 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 232 B/s rd, 465 B/s wr, 1 op/s
Nov 29 05:10:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:42 compute-0 ceph-mon[75176]: osdmap e39: 3 total, 3 up, 3 in
Nov 29 05:10:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:42 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3779896496' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 05:10:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 29 05:10:42 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 29 05:10:42 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 09d9423c-3037-46fc-8fab-602c085244a8 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 05:10:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 05:10:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 11 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 468 B/s wr, 1 op/s
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:43 compute-0 sudo[103759]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:43 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 92af31f6-6f04-4e62-831d-0412490461d8 does not exist
Nov 29 05:10:43 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 84ac48af-dc68-424f-af83-b342a0417e49 does not exist
Nov 29 05:10:43 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 4de3bebe-6145-407b-a805-ae61f4d5459d does not exist
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:43 compute-0 sudo[104057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:43 compute-0 sudo[104057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:43 compute-0 sudo[104057]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:43 compute-0 sudo[104082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:43 compute-0 sudo[104082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:43 compute-0 sudo[104082]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:43 compute-0 sudo[104107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:43 compute-0 sudo[104107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:43 compute-0 sudo[104107]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:43 compute-0 sudo[104132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:10:43 compute-0 sudo[104132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:43 compute-0 ceph-mon[75176]: osdmap e40: 3 total, 3 up, 3 in
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 29 05:10:43 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 60b47820-dec1-4bfb-aa36-6ff5c734a866 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 05:10:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 05:10:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:43 compute-0 podman[104196]: 2025-11-29 05:10:43.991322813 +0000 UTC m=+0.056651482 container create bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:10:44 compute-0 systemd[1]: Started libpod-conmon-bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8.scope.
Nov 29 05:10:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:44 compute-0 podman[104196]: 2025-11-29 05:10:43.973786648 +0000 UTC m=+0.039115307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:44 compute-0 podman[104196]: 2025-11-29 05:10:44.074616444 +0000 UTC m=+0.139945153 container init bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:10:44 compute-0 podman[104196]: 2025-11-29 05:10:44.082146933 +0000 UTC m=+0.147475592 container start bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:10:44 compute-0 ecstatic_visvesvaraya[104212]: 167 167
Nov 29 05:10:44 compute-0 podman[104196]: 2025-11-29 05:10:44.086957216 +0000 UTC m=+0.152285905 container attach bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:10:44 compute-0 systemd[1]: libpod-bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8.scope: Deactivated successfully.
Nov 29 05:10:44 compute-0 podman[104196]: 2025-11-29 05:10:44.087993761 +0000 UTC m=+0.153322420 container died bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:10:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9e66ff3f7586be1c0261798741531d110f7cf68e652c3c25822fd8ad2a58bde-merged.mount: Deactivated successfully.
Nov 29 05:10:44 compute-0 podman[104196]: 2025-11-29 05:10:44.123674435 +0000 UTC m=+0.189003084 container remove bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:44 compute-0 systemd[1]: libpod-conmon-bf011d4deb93fa171357badb0e3f6b1611cc905e0d8cab4e71c78c94c05f14d8.scope: Deactivated successfully.
Nov 29 05:10:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:44 compute-0 podman[104238]: 2025-11-29 05:10:44.320611116 +0000 UTC m=+0.047957216 container create b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 05:10:44 compute-0 systemd[1]: Started libpod-conmon-b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1.scope.
Nov 29 05:10:44 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:44 compute-0 podman[104238]: 2025-11-29 05:10:44.295917851 +0000 UTC m=+0.023263991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:44 compute-0 podman[104238]: 2025-11-29 05:10:44.398473219 +0000 UTC m=+0.125819369 container init b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:44 compute-0 podman[104238]: 2025-11-29 05:10:44.406459467 +0000 UTC m=+0.133805587 container start b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:10:44 compute-0 podman[104238]: 2025-11-29 05:10:44.410155435 +0000 UTC m=+0.137501565 container attach b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:10:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 29 05:10:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 29 05:10:44 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 29 05:10:44 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 24cf8aab-88f0-41ac-9c03-50177383e1e1 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 05:10:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 29 05:10:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 05:10:44 compute-0 ceph-mon[75176]: pgmap v92: 11 pgs: 11 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 234 B/s rd, 468 B/s wr, 1 op/s
Nov 29 05:10:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:44 compute-0 ceph-mon[75176]: osdmap e41: 3 total, 3 up, 3 in
Nov 29 05:10:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v95: 73 pgs: 62 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 8.0 KiB/s wr, 394 op/s
Nov 29 05:10:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 05:10:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 05:10:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:45 compute-0 angry_poitras[104255]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:10:45 compute-0 angry_poitras[104255]: --> relative data size: 1.0
Nov 29 05:10:45 compute-0 angry_poitras[104255]: --> All data devices are unavailable
Nov 29 05:10:45 compute-0 systemd[1]: libpod-b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1.scope: Deactivated successfully.
Nov 29 05:10:45 compute-0 systemd[1]: libpod-b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1.scope: Consumed 1.021s CPU time.
Nov 29 05:10:45 compute-0 conmon[104255]: conmon b69b545600e3f0cf2cdc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1.scope/container/memory.events
Nov 29 05:10:45 compute-0 podman[104238]: 2025-11-29 05:10:45.478031547 +0000 UTC m=+1.205377647 container died b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-577be1320c369e2a29e7a94a5492c2eb1b465e715d9ce98bcefd2c19dd49c325-merged.mount: Deactivated successfully.
Nov 29 05:10:45 compute-0 podman[104238]: 2025-11-29 05:10:45.551337461 +0000 UTC m=+1.278683601 container remove b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:10:45 compute-0 systemd[1]: libpod-conmon-b69b545600e3f0cf2cdc700932c8755b84cdc4232161e1a74beead75651b62e1.scope: Deactivated successfully.
Nov 29 05:10:45 compute-0 sudo[104132]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:45 compute-0 sudo[104298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:45 compute-0 sudo[104298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:45 compute-0 sudo[104298]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:45 compute-0 sudo[104323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:45 compute-0 sudo[104323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:45 compute-0 sudo[104323]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:45 compute-0 sudo[104348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:45 compute-0 sudo[104348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:45 compute-0 sudo[104348]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:45 compute-0 sudo[104373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:10:45 compute-0 sudo[104373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 29 05:10:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 05:10:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 05:10:45 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 29 05:10:45 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=43 pruub=9.690950394s) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active pruub 77.746055603s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:10:45 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 40938a28-7d35-4d1d-acc7-268a3723f906 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 05:10:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 05:10:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:46 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=43 pruub=9.690950394s) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown pruub 77.746055603s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:46 compute-0 ceph-mon[75176]: osdmap e42: 3 total, 3 up, 3 in
Nov 29 05:10:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 05:10:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=19/20 n=0 ec=12/12 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=13.106151581s) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active pruub 71.429924011s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=13.106102943s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active pruub 71.429954529s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=19/20 n=0 ec=12/12 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=13.106151581s) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown pruub 71.429924011s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[5.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=13.106102943s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown pruub 71.429954529s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=19/20 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:46 compute-0 podman[104439]: 2025-11-29 05:10:46.268446692 +0000 UTC m=+0.041738269 container create b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 05:10:46 compute-0 systemd[1]: Started libpod-conmon-b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f.scope.
Nov 29 05:10:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:46 compute-0 podman[104439]: 2025-11-29 05:10:46.337303901 +0000 UTC m=+0.110595498 container init b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:10:46 compute-0 podman[104439]: 2025-11-29 05:10:46.343240462 +0000 UTC m=+0.116532039 container start b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:10:46 compute-0 podman[104439]: 2025-11-29 05:10:46.3465385 +0000 UTC m=+0.119830097 container attach b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:10:46 compute-0 zen_swirles[104455]: 167 167
Nov 29 05:10:46 compute-0 systemd[1]: libpod-b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f.scope: Deactivated successfully.
Nov 29 05:10:46 compute-0 podman[104439]: 2025-11-29 05:10:46.347280628 +0000 UTC m=+0.120572215 container died b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 29 05:10:46 compute-0 podman[104439]: 2025-11-29 05:10:46.252197058 +0000 UTC m=+0.025488655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7680914fbf19876e14cd51468caae358c6f8f7a083ad79ca4c9780af84c8dca-merged.mount: Deactivated successfully.
Nov 29 05:10:46 compute-0 ceph-mgr[75473]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Nov 29 05:10:46 compute-0 podman[104439]: 2025-11-29 05:10:46.379764486 +0000 UTC m=+0.153056073 container remove b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 05:10:46 compute-0 systemd[1]: libpod-conmon-b35d832fc2a39af33768ba893f3011b3906b42a416816288bd3c915f41b9631f.scope: Deactivated successfully.
Nov 29 05:10:46 compute-0 podman[104481]: 2025-11-29 05:10:46.538472762 +0000 UTC m=+0.043844658 container create 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 05:10:46 compute-0 systemd[1]: Started libpod-conmon-7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3.scope.
Nov 29 05:10:46 compute-0 podman[104481]: 2025-11-29 05:10:46.517094076 +0000 UTC m=+0.022466022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fcef739eb4fbc09d23d1f5f08212bc798961ea4405d47f44aef72aa591d0c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fcef739eb4fbc09d23d1f5f08212bc798961ea4405d47f44aef72aa591d0c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fcef739eb4fbc09d23d1f5f08212bc798961ea4405d47f44aef72aa591d0c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1fcef739eb4fbc09d23d1f5f08212bc798961ea4405d47f44aef72aa591d0c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:46 compute-0 podman[104481]: 2025-11-29 05:10:46.643932917 +0000 UTC m=+0.149304913 container init 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:10:46 compute-0 podman[104481]: 2025-11-29 05:10:46.655934502 +0000 UTC m=+0.161306428 container start 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:10:46 compute-0 podman[104481]: 2025-11-29 05:10:46.659738182 +0000 UTC m=+0.165110108 container attach 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:10:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 29 05:10:46 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 05:10:47 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 29 05:10:47 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev d093d1d7-e900-4ac1-90ba-e4b9b7c58eeb (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=19/20 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 05:10:47 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=16/17 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-mon[75176]: pgmap v95: 73 pgs: 62 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 8.0 KiB/s wr, 394 op/s
Nov 29 05:10:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:47 compute-0 ceph-mon[75176]: osdmap e43: 3 total, 3 up, 3 in
Nov 29 05:10:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:47 compute-0 ceph-mon[75176]: osdmap e44: 3 total, 3 up, 3 in
Nov 29 05:10:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=43/44 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=41/44 n=0 ec=12/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=43/44 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=16/16 les/c/f=17/17/0 sis=43) [0] r=0 lpr=43 pi=[16,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v98: 135 pgs: 124 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 8.0 KiB/s wr, 394 op/s
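The two pgmap summaries above (73 pgs at v95, 135 pgs at v98, most still "unknown") show the PG count growing as the autoscaler splits the pools while peering is still in flight. A minimal Python sketch for tallying these summaries offline; it assumes exactly the "pgmap vN: N pgs: <count> <state>, ...;" layout seen in these lines and is illustrative only:

import re

# e.g. 'pgmap v98: 135 pgs: 124 unknown, 11 active+clean; 456 KiB data, ...'
PGMAP_RE = re.compile(r"pgmap v(\d+): (\d+) pgs: ([^;]+);")

def parse_pgmap(line):
    """Return (version, total_pgs, {state: count}) or None for other lines."""
    m = PGMAP_RE.search(line)
    if not m:
        return None
    states = {}
    for part in m.group(3).split(","):
        count, state = part.strip().split(" ", 1)
        states[state] = int(count)
    return int(m.group(1)), int(m.group(2)), states

if __name__ == "__main__":
    sample = ("pgmap v98: 135 pgs: 124 unknown, 11 active+clean; "
              "456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail")
    print(parse_pgmap(sample))  # -> (98, 135, {'unknown': 124, 'active+clean': 11})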
Nov 29 05:10:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 05:10:47 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 05:10:47 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 05:10:47 compute-0 quirky_darwin[104498]: {
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:     "0": [
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:         {
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "devices": [
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "/dev/loop3"
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             ],
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_name": "ceph_lv0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_size": "21470642176",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "name": "ceph_lv0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "tags": {
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.crush_device_class": "",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.encrypted": "0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.osd_id": "0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.type": "block",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.vdo": "0"
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             },
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "type": "block",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "vg_name": "ceph_vg0"
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:         }
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:     ],
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:     "1": [
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:         {
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "devices": [
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "/dev/loop4"
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             ],
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_name": "ceph_lv1",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_size": "21470642176",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "name": "ceph_lv1",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "tags": {
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.crush_device_class": "",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.encrypted": "0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.osd_id": "1",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.type": "block",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.vdo": "0"
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             },
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "type": "block",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "vg_name": "ceph_vg1"
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:         }
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:     ],
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:     "2": [
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:         {
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "devices": [
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "/dev/loop5"
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             ],
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_name": "ceph_lv2",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_size": "21470642176",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "name": "ceph_lv2",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "tags": {
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.cluster_name": "ceph",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.crush_device_class": "",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.encrypted": "0",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.osd_id": "2",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.type": "block",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:                 "ceph.vdo": "0"
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             },
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "type": "block",
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:             "vg_name": "ceph_vg2"
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:         }
Nov 29 05:10:47 compute-0 quirky_darwin[104498]:     ]
Nov 29 05:10:47 compute-0 quirky_darwin[104498]: }
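The JSON block printed by the quirky_darwin container is an OSD inventory keyed by OSD id, each entry carrying the logical volume, its backing device, and the ceph.* tags (cluster fsid, osd_fsid, encryption flag, drive-group affinity). A short illustrative sketch that flattens it into an OSD-to-device map, assuming the object above was saved to a hypothetical ceph_volume.json:

import json

with open("ceph_volume.json") as fh:  # hypothetical capture of the JSON above
    inventory = json.load(fh)

for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv.get("tags", {})
        print(f"osd.{osd_id}: lv={lv['lv_path']} "
              f"devices={','.join(lv['devices'])} "
              f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

For the data above this would report osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5, consistent with the "3 total, 3 up, 3 in" osdmap the mon keeps logging.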
Nov 29 05:10:47 compute-0 systemd[1]: libpod-7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3.scope: Deactivated successfully.
Nov 29 05:10:47 compute-0 podman[104481]: 2025-11-29 05:10:47.43715957 +0000 UTC m=+0.942531476 container died 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1fcef739eb4fbc09d23d1f5f08212bc798961ea4405d47f44aef72aa591d0c1-merged.mount: Deactivated successfully.
Nov 29 05:10:47 compute-0 systemd[76809]: Starting Mark boot as successful...
Nov 29 05:10:47 compute-0 systemd[76809]: Finished Mark boot as successful.
Nov 29 05:10:47 compute-0 podman[104481]: 2025-11-29 05:10:47.497039317 +0000 UTC m=+1.002411233 container remove 7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 05:10:47 compute-0 systemd[1]: libpod-conmon-7e33b78e877ada320764a01ef841e6bdf65970fede55f938ea6b1f242dd2d8d3.scope: Deactivated successfully.
Nov 29 05:10:47 compute-0 sudo[104373]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:47 compute-0 sudo[104521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:47 compute-0 sudo[104521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:47 compute-0 sudo[104521]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:47 compute-0 sudo[104546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:10:47 compute-0 sudo[104546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:47 compute-0 sudo[104546]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:47 compute-0 sudo[104571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:47 compute-0 sudo[104571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:47 compute-0 sudo[104571]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:47 compute-0 sudo[104596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:10:47 compute-0 sudo[104596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 29 05:10:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 05:10:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 05:10:48 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 29 05:10:48 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 3d650303-1eb4-4605-9d73-51a2e3a81f60 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 05:10:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 05:10:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:48 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 45 pg[6.0( v 35'39 (0'0,35'39] local-lis/les=19/20 n=22 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=11.249977112s) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 31'38 mlcod 31'38 active pruub 81.319190979s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:10:48 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 45 pg[6.0( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=11.249977112s) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 31'38 mlcod 0'0 unknown pruub 81.319190979s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 05:10:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 05:10:48 compute-0 ceph-mon[75176]: osdmap e45: 3 total, 3 up, 3 in
Nov 29 05:10:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
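Each pool resize shows up twice in the mon audit trail: a cmd=[...]: dispatch entry when the mgr submits it, then a cmd='[...]': finished entry once the change commits to the osdmap. A hedged sketch that extracts the command payload from either form; it assumes exactly the quoting style seen in these lines:

import json
import re

# Matches both  cmd=[{...}]: dispatch  and  cmd='[{...}]': finished
AUDIT_RE = re.compile(r"cmd='?(\[.*?\])'?: (dispatch|finished)")

def parse_audit(line):
    """Return [(prefix, pool, var, val, phase), ...] or None."""
    m = AUDIT_RE.search(line)
    if not m:
        return None
    phase = m.group(2)
    return [(c.get("prefix"), c.get("pool"), c.get("var"), c.get("val"), phase)
            for c in json.loads(m.group(1))]

Applied to the line just above, it would yield a single tuple ('osd pool set', 'default.rgw.log', 'pg_num', '32', 'dispatch').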
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=41 pruub=13.363991737s) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active pruub 78.893814087s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 44 pg[3.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=41 pruub=13.363991737s) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown pruub 78.893814087s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=13.139037132s) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active pruub 78.675048828s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.2( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.3( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.4( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.5( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.6( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.7( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.8( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.a( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.b( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.c( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.d( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.f( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=13.139037132s) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown pruub 78.675048828s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.10( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.12( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.13( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.14( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.16( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.17( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.18( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.19( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1c( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1b( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1e( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 45 pg[3.1f( empty local-lis/les=14/15 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
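
The burst of pg[3.*] lines above is osd.1's peering state machine reacting to osdmap epoch 45: every placement group in pool 3 starts a new interval and, since this OSD holds rank 0 in the up set ([1] r=0), each PG transitions Start -> Primary. Roughly: ec is the PG's creation epoch, lis/les are the last interval/epoch started, sis is same-interval-since, and lpr is the last peering reset. A hedged way to watch the same states from the CLI instead of the journal, assuming an admin keyring is available on the host:

    $ ceph pg ls-by-osd osd.1    # every PG whose acting set includes osd.1, with its state
    $ ceph pg 3.d query          # full peering history and state for a single PG
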
Nov 29 05:10:48 compute-0 podman[104661]: 2025-11-29 05:10:48.188579013 +0000 UTC m=+0.060015882 container create 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:10:48 compute-0 systemd[1]: Started libpod-conmon-7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1.scope.
Nov 29 05:10:48 compute-0 podman[104661]: 2025-11-29 05:10:48.159502265 +0000 UTC m=+0.030939194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:48 compute-0 podman[104661]: 2025-11-29 05:10:48.273633035 +0000 UTC m=+0.145069914 container init 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:10:48 compute-0 podman[104661]: 2025-11-29 05:10:48.281811289 +0000 UTC m=+0.153248148 container start 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 05:10:48 compute-0 podman[104661]: 2025-11-29 05:10:48.285637219 +0000 UTC m=+0.157074118 container attach 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:10:48 compute-0 cranky_neumann[104677]: 167 167
Nov 29 05:10:48 compute-0 systemd[1]: libpod-7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1.scope: Deactivated successfully.
Nov 29 05:10:48 compute-0 conmon[104677]: conmon 7578d87071be9d4c39e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1.scope/container/memory.events
Nov 29 05:10:48 compute-0 podman[104661]: 2025-11-29 05:10:48.288226111 +0000 UTC m=+0.159662990 container died 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 05:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7033aef4372ca7c05096fc654e74b3470927c8819ec5485178867281cd8ab029-merged.mount: Deactivated successfully.
Nov 29 05:10:48 compute-0 podman[104661]: 2025-11-29 05:10:48.326513267 +0000 UTC m=+0.197950136 container remove 7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_neumann, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:48 compute-0 systemd[1]: libpod-conmon-7578d87071be9d4c39e674605cd4e8d5efe3a64a45b2f432d83b20da0a9ce9e1.scope: Deactivated successfully.
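
cranky_neumann lives for roughly 140 ms: podman records create, init, start, attach, died and remove within the same second, and the only output is the "167 167" line. That is the signature of a one-shot probe container; cephadm runs these against the ceph image to inspect the host, and 167:167 is the ceph uid:gid baked into the upstream images, so this run is most likely the uid/gid check. A sketch of an equivalent one-shot invocation, assuming that is indeed the probe (the image digest is taken from the log above):

    $ podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph
    167 167

The conmon warning about memory.events is a side effect of the same short lifetime: the scope's cgroup is already gone by the time conmon tries to read it.
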
Nov 29 05:10:48 compute-0 podman[104702]: 2025-11-29 05:10:48.495922626 +0000 UTC m=+0.041487953 container create 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:10:48 compute-0 systemd[1]: Started libpod-conmon-0d132da35d11148d00723688123313615498899682f513961b851d37c8566772.scope.
Nov 29 05:10:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64bf112d2a0dad0fcd417cd3cbfcccc13ea86444512dfb598213e8a78ee16a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64bf112d2a0dad0fcd417cd3cbfcccc13ea86444512dfb598213e8a78ee16a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64bf112d2a0dad0fcd417cd3cbfcccc13ea86444512dfb598213e8a78ee16a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d64bf112d2a0dad0fcd417cd3cbfcccc13ea86444512dfb598213e8a78ee16a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:10:48 compute-0 podman[104702]: 2025-11-29 05:10:48.476138358 +0000 UTC m=+0.021703735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:10:48 compute-0 podman[104702]: 2025-11-29 05:10:48.580291242 +0000 UTC m=+0.125856599 container init 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:10:48 compute-0 podman[104702]: 2025-11-29 05:10:48.587095663 +0000 UTC m=+0.132661020 container start 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 29 05:10:48 compute-0 podman[104702]: 2025-11-29 05:10:48.593535406 +0000 UTC m=+0.139100753 container attach 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:10:48 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 29 05:10:48 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 29 05:10:48 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 29 05:10:48 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1 scrub ok
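
In parallel the OSDs keep running their routine scrubs: each starts/ok pair is a consistency check over one placement group's objects, logged on the cluster channel at debug level and echoed by the monitor about a second later. Pool 4 here is scrubbed by osd.0 (pid 89151) and pool 2 by the third OSD process (pid 91343, presumably osd.2). The same operation can be requested and observed by hand:

    $ ceph pg scrub 4.1                      # ask the primary OSD to scrub this PG now
    $ ceph pg dump pgs_brief | grep '^4\.1'  # current state of that PG
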
Nov 29 05:10:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 29 05:10:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 05:10:49 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 29 05:10:49 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev d8761330-0c02-45a4-a3c5-bfa81a62a4af (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 05:10:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 05:10:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
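
The audit entries above document the mgr's pg_autoscaler at work (compare the progress event "PG autoscaler increasing pool 9 PGs from 1 to 32"): the freshly created RGW pools start at pg_num=1 and the autoscaler dispatches an "osd pool set ... pg_num 32" for each. Setting pg_num only moves the target; the monitor then walks the real value upward through the "pg_num_actual" commands that appear further below, splitting a few PGs per osdmap epoch. To see the autoscaler's targets, or to make the same change manually:

    $ ceph osd pool autoscale-status                    # per-pool size, target and autoscale mode
    $ ceph osd pool get default.rgw.control pg_num
    $ ceph osd pool set default.rgw.control pg_num 32   # the command the mgr dispatches here
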
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.a( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.5( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.9( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.4( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.8( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.7( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.6( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.2( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.f( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.e( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.c( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.d( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=19/20 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.0( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 31'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 46 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [0] r=0 lpr=45 pi=[19,45)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=41/46 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.b( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=45/46 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.17( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [1] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=14/14 les/c/f=15/15/0 sis=41) [1] r=0 lpr=41 pi=[14,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:49 compute-0 ceph-mon[75176]: pgmap v98: 135 pgs: 124 unknown, 11 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 175 KiB/s rd, 8.0 KiB/s wr, 394 op/s
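
Each "AllReplicasActivated ... Activating complete" line above marks the end of peering for one PG: the primary has activated and, with a single-OSD acting set ([1]), there are no replicas left to wait for, so the PG flips to active. The pgmap line then shows the cluster-wide tally mid-convergence: of 135 PGs, 124 are still "unknown" because the mgr has not yet received their first stats report, while 11 are already active+clean. Two ways to watch the same convergence live:

    $ ceph pg stat    # one-line summary, e.g. "135 pgs: 124 unknown, 11 active+clean; ..."
    $ ceph -s         # full status: health, mon/mgr/osd sections, pgmap
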
Nov 29 05:10:49 compute-0 ceph-mon[75176]: 4.1 scrub starts
Nov 29 05:10:49 compute-0 ceph-mon[75176]: 4.1 scrub ok
Nov 29 05:10:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:49 compute-0 ceph-mon[75176]: osdmap e46: 3 total, 3 up, 3 in
Nov 29 05:10:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v101: 181 pgs: 108 unknown, 73 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 05:10:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 05:10:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]: {
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "osd_id": 0,
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "type": "bluestore"
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:     },
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "osd_id": 1,
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "type": "bluestore"
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:     },
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "osd_id": 2,
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:         "type": "bluestore"
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]:     }
Nov 29 05:10:49 compute-0 awesome_archimedes[104719]: }
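
The JSON block is the stdout of the second one-shot container (awesome_archimedes): an inventory of the BlueStore OSDs on this host, keyed by osd_uuid, showing all three OSDs sharing the same ceph_fsid and each backed by one logical volume (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2). The shape matches "ceph-volume raw list", which cephadm runs in a container to refresh its per-host device cache; the config-key writes a few lines below (mgr/cephadm/host.compute-0.devices.0) look like that cache being persisted. The same listing, assuming a cephadm-managed host:

    $ cephadm ceph-volume raw list    # wraps ceph-volume in the ceph container
    $ ceph-volume raw list            # directly, where ceph-volume is installed
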
Nov 29 05:10:49 compute-0 systemd[1]: libpod-0d132da35d11148d00723688123313615498899682f513961b851d37c8566772.scope: Deactivated successfully.
Nov 29 05:10:49 compute-0 podman[104702]: 2025-11-29 05:10:49.604383928 +0000 UTC m=+1.149949295 container died 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:10:49 compute-0 systemd[1]: libpod-0d132da35d11148d00723688123313615498899682f513961b851d37c8566772.scope: Consumed 1.015s CPU time.
Nov 29 05:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d64bf112d2a0dad0fcd417cd3cbfcccc13ea86444512dfb598213e8a78ee16a5-merged.mount: Deactivated successfully.
Nov 29 05:10:49 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 29 05:10:49 compute-0 podman[104702]: 2025-11-29 05:10:49.680750145 +0000 UTC m=+1.226315482 container remove 0d132da35d11148d00723688123313615498899682f513961b851d37c8566772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:10:49 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 29 05:10:49 compute-0 systemd[1]: libpod-conmon-0d132da35d11148d00723688123313615498899682f513961b851d37c8566772.scope: Deactivated successfully.
Nov 29 05:10:49 compute-0 sudo[104596]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:10:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:10:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:49 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 8f0cd8a4-94f1-45b6-8750-2ddf16853e64 does not exist
Nov 29 05:10:49 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev cb8f6dad-bfed-4058-8b76-ab751d70ae38 does not exist
Nov 29 05:10:49 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 29 05:10:49 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 29 05:10:49 compute-0 sudo[104766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:10:49 compute-0 sudo[104766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:49 compute-0 sudo[104766]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:49 compute-0 sudo[104791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:10:49 compute-0 sudo[104791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:10:49 compute-0 sudo[104791]: pam_unix(sudo:session): session closed for user root
Nov 29 05:10:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 29 05:10:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 05:10:50 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 29 05:10:50 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev 563f6d09-9437-473a-957b-26b842c824c9 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 05:10:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 05:10:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:50 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 47 pg[8.0( v 31'4 (0'0,31'4] local-lis/les=30/31 n=4 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47 pruub=14.824311256s) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 31'3 mlcod 31'3 active pruub 82.229804993s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:10:50 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 47 pg[9.0( v 38'583 (0'0,38'583] local-lis/les=32/33 n=209 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=8.842782021s) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 38'582 mlcod 38'582 active pruub 76.248367310s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:10:50 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 47 pg[8.0( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47 pruub=14.824311256s) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 31'3 mlcod 0'0 unknown pruub 82.229804993s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:50 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 47 pg[9.0( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47 pruub=8.842782021s) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 38'582 mlcod 0'0 unknown pruub 76.248367310s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:50 compute-0 ceph-mon[75176]: 2.1 scrub starts
Nov 29 05:10:50 compute-0 ceph-mon[75176]: 2.1 scrub ok
Nov 29 05:10:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:50 compute-0 ceph-mon[75176]: 4.2 scrub starts
Nov 29 05:10:50 compute-0 ceph-mon[75176]: 4.2 scrub ok
Nov 29 05:10:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:50 compute-0 ceph-mon[75176]: osdmap e47: 3 total, 3 up, 3 in
Nov 29 05:10:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 05:10:50 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 29 05:10:50 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 29 05:10:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 29 05:10:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 05:10:51 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] update: starting ev fa9a4006-7c10-453c-b204-8f1af395116a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev b54068fd-06f2-486d-9164-d647b988f2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event b54068fd-06f2-486d-9164-d647b988f2c7 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 09d9423c-3037-46fc-8fab-602c085244a8 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.15( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.14( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.14( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.15( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.17( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 09d9423c-3037-46fc-8fab-602c085244a8 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.16( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 60b47820-dec1-4bfb-aa36-6ff5c734a866 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 60b47820-dec1-4bfb-aa36-6ff5c734a866 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 24cf8aab-88f0-41ac-9c03-50177383e1e1 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 24cf8aab-88f0-41ac-9c03-50177383e1e1 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 40938a28-7d35-4d1d-acc7-268a3723f906 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 40938a28-7d35-4d1d-acc7-268a3723f906 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev d093d1d7-e900-4ac1-90ba-e4b9b7c58eeb (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event d093d1d7-e900-4ac1-90ba-e4b9b7c58eeb (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 3d650303-1eb4-4605-9d73-51a2e3a81f60 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 3d650303-1eb4-4605-9d73-51a2e3a81f60 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev d8761330-0c02-45a4-a3c5-bfa81a62a4af (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event d8761330-0c02-45a4-a3c5-bfa81a62a4af (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev 563f6d09-9437-473a-957b-26b842c824c9 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 563f6d09-9437-473a-957b-26b842c824c9 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] complete: finished ev fa9a4006-7c10-453c-b204-8f1af395116a (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event fa9a4006-7c10-453c-b204-8f1af395116a (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.10( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.17( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.16( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.11( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.10( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.13( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.12( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.11( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.13( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.12( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.d( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.c( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.c( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.e( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.f( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.8( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.d( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.9( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.a( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.b( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.3( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.2( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1( v 31'4 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.f( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.e( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.b( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.a( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.8( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.9( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.2( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.3( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.6( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.7( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.6( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.7( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.4( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.5( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.5( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.4( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1b( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1a( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1b( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.19( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.18( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1a( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.18( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1f( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1e( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1f( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1d( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1e( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.19( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1c( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1d( v 38'583 lc 0'0 (0'0,38'583] local-lis/les=32/33 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1c( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=30/31 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.14( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.16( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.17( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.10( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.13( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.12( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.8( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.0( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=30/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 31'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.0( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 38'582 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.2( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.a( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.3( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.7( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.5( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.4( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.19( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1a( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=32/32 les/c/f=33/33/0 sis=47) [1] r=0 lpr=47 pi=[32,47)/1 crt=38'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 48 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=30/30 les/c/f=31/31/0 sis=47) [1] r=0 lpr=47 pi=[30,47)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:51 compute-0 ceph-mon[75176]: pgmap v101: 181 pgs: 108 unknown, 73 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:51 compute-0 ceph-mon[75176]: 3.1 scrub starts
Nov 29 05:10:51 compute-0 ceph-mon[75176]: 3.1 scrub ok
Nov 29 05:10:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 05:10:51 compute-0 ceph-mon[75176]: osdmap e48: 3 total, 3 up, 3 in
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v104: 243 pgs: 62 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 05:10:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 05:10:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:51 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 15 completed events
Nov 29 05:10:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 05:10:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 29 05:10:52 compute-0 ceph-mon[75176]: 3.2 scrub starts
Nov 29 05:10:52 compute-0 ceph-mon[75176]: 3.2 scrub ok
Nov 29 05:10:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 05:10:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:10:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 05:10:52 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 29 05:10:52 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=10.828987122s) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active pruub 80.300170898s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:10:52 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49 pruub=10.828987122s) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown pruub 80.300170898s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:52 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 29 05:10:52 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 29 05:10:52 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 29 05:10:52 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 29 05:10:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 29 05:10:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 05:10:53 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 29 05:10:53 compute-0 ceph-mon[75176]: pgmap v104: 243 pgs: 62 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 05:10:53 compute-0 ceph-mon[75176]: osdmap e49: 3 total, 3 up, 3 in
Nov 29 05:10:53 compute-0 ceph-mon[75176]: 4.3 scrub starts
Nov 29 05:10:53 compute-0 ceph-mon[75176]: 4.3 scrub ok
Nov 29 05:10:53 compute-0 ceph-mon[75176]: 5.1 scrub starts
Nov 29 05:10:53 compute-0 ceph-mon[75176]: 5.1 scrub ok
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=36/37 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.16( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.13( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.0( empty local-lis/les=49/50 n=0 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.5( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.7( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.1d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 50 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=36/36 les/c/f=37/37/0 sis=49) [1] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v107: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:53 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 29 05:10:53 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 29 05:10:54 compute-0 ceph-mon[75176]: osdmap e50: 3 total, 3 up, 3 in
Nov 29 05:10:54 compute-0 ceph-mon[75176]: pgmap v107: 305 pgs: 124 unknown, 181 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 49 pg[10.0( v 35'16 (0'0,35'16] local-lis/les=34/35 n=8 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=14.534220695s) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 35'15 mlcod 35'15 active pruub 81.059791565s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.0( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=14.534220695s) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 35'15 mlcod 0'0 unknown pruub 81.059791565s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.7( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.8( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.9( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.a( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.b( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.c( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.2( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.3( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.4( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.5( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.6( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.d( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.f( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.10( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.e( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.11( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.12( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.13( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.14( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.15( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.16( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.17( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.18( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.19( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1a( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1b( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1c( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1d( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1e( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 50 pg[10.1f( v 35'16 lc 0'0 (0'0,35'16] local-lis/les=34/35 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:10:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 29 05:10:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 05:10:55 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 29 05:10:55 compute-0 ceph-mon[75176]: 2.2 scrub starts
Nov 29 05:10:55 compute-0 ceph-mon[75176]: 2.2 scrub ok
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1f( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1d( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1b( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1c( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.18( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.3( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.5( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.0( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 35'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.a( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.c( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.e( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.d( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.15( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.14( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.9( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 51 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [2] r=0 lpr=49 pi=[34,49)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:10:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v109: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:55 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Nov 29 05:10:55 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Nov 29 05:10:55 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 29 05:10:55 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 29 05:10:56 compute-0 ceph-mon[75176]: osdmap e51: 3 total, 3 up, 3 in
Nov 29 05:10:56 compute-0 ceph-mon[75176]: pgmap v109: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:56 compute-0 ceph-mon[75176]: 4.4 deep-scrub starts
Nov 29 05:10:56 compute-0 ceph-mon[75176]: 4.4 deep-scrub ok
Nov 29 05:10:56 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Nov 29 05:10:56 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Nov 29 05:10:57 compute-0 ceph-mon[75176]: 3.3 scrub starts
Nov 29 05:10:57 compute-0 ceph-mon[75176]: 3.3 scrub ok
Nov 29 05:10:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:57 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 29 05:10:57 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 29 05:10:58 compute-0 ceph-mon[75176]: 5.2 deep-scrub starts
Nov 29 05:10:58 compute-0 ceph-mon[75176]: 5.2 deep-scrub ok
Nov 29 05:10:58 compute-0 ceph-mon[75176]: pgmap v110: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:10:58 compute-0 ceph-mon[75176]: 4.5 scrub starts
Nov 29 05:10:58 compute-0 ceph-mon[75176]: 4.5 scrub ok
Nov 29 05:10:58 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 29 05:10:58 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 29 05:10:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:10:59 compute-0 ceph-mon[75176]: 4.6 scrub starts
Nov 29 05:10:59 compute-0 ceph-mon[75176]: 4.6 scrub ok
Nov 29 05:10:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:00 compute-0 ceph-mon[75176]: pgmap v111: 305 pgs: 32 peering, 273 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:00 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 29 05:11:00 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 29 05:11:00 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 29 05:11:00 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 29 05:11:01 compute-0 ceph-mon[75176]: 4.7 scrub starts
Nov 29 05:11:01 compute-0 ceph-mon[75176]: 4.7 scrub ok
Nov 29 05:11:01 compute-0 ceph-mon[75176]: 2.3 scrub starts
Nov 29 05:11:01 compute-0 ceph-mon[75176]: 2.3 scrub ok
Nov 29 05:11:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 05:11:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 05:11:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 05:11:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 05:11:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 05:11:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 05:11:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 05:11:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 05:11:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 05:11:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 05:11:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 05:11:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 05:11:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:01 compute-0 ceph-mgr[75473]: [progress INFO root] Completed event 8a19af1e-d04e-4eb0-90ab-4fa888746f41 (Global Recovery Event) in 15 seconds
Nov 29 05:11:01 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 29 05:11:01 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 29 05:11:01 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 29 05:11:01 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 29 05:11:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 29 05:11:02 compute-0 ceph-mon[75176]: pgmap v112: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 05:11:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 05:11:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:11:02 compute-0 ceph-mon[75176]: 4.8 scrub starts
Nov 29 05:11:02 compute-0 ceph-mon[75176]: 4.8 scrub ok
Nov 29 05:11:02 compute-0 ceph-mon[75176]: 5.3 scrub starts
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 05:11:02 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806700706s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090400696s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.800364494s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084068298s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.800285339s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084022522s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806643486s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090400696s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.800265312s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084068298s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799758911s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084022522s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799633026s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084075928s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806384087s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090850830s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799608231s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084075928s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806331635s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090850830s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799503326s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084098816s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799499512s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084091187s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799484253s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084098816s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799433708s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084091187s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805849075s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090591431s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799783707s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084541321s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805829048s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090591431s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799480438s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084320068s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799401283s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084320068s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799250603s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084259033s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799760818s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084541321s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799201965s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084259033s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799141884s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084251404s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799116135s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084251404s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806241035s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.091476440s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805368423s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090606689s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799041748s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084297180s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805338860s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090606689s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799015999s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084297180s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799060822s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084350586s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.806206703s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.091476440s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.799014091s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084350586s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805151939s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090728760s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798839569s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084434509s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798835754s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084472656s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798818588s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084434509s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805104256s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090728760s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798794746s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084472656s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805103302s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090866089s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.804840088s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090591431s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.805082321s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090866089s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.804779053s) [1] r=-1 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090591431s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798666000s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084533691s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798616409s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084526062s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798596382s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084526062s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.18( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798618317s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084533691s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798488617s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084510803s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798519135s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084556580s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.1b( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798500061s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084556580s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798476219s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084548950s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798441887s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084510803s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.1a( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798392296s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084541321s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798430443s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084548950s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798376083s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084541321s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.e( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798214912s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 93.084579468s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.1( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.798124313s) [2] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 93.084579468s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.a( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.13( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.11( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[4.1c( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908418655s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.312683105s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788142204s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.192420959s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908383369s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.312683105s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788107872s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.192420959s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.789016724s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193420410s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912281036s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.316673279s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788949013s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193420410s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912191391s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.316673279s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.781491280s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.186050415s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912093163s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.316673279s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.788057327s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.192665100s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912058830s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.316673279s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787798882s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.192436218s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.781434059s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.186050415s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.788005829s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.192665100s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787751198s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.192436218s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912031174s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.316795349s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.912009239s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.316795349s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787770271s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.192581177s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787744522s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.192581177s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787699699s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.192657471s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788142204s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193099976s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788173676s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193176270s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787672043s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.192657471s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788096428s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193099976s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788153648s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193176270s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788089752s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193099976s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788050652s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193099976s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787918091s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193084717s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787858009s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193092346s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787856102s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193084717s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911723137s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.316970825s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787837982s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193092346s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911678314s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.316970825s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787822723s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193183899s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911528587s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.316932678s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911496162s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.316932678s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787911415s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193290710s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787763596s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193183899s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787771225s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193290710s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787641525s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193283081s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787619591s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193283081s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787540436s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193290710s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787517548s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193290710s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911201477s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317001343s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911181450s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317001343s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911158562s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317001343s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911150932s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317001343s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787746429s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193634033s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787750244s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193672180s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787727356s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193634033s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787703514s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193672180s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.911005020s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317024231s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.910983086s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317024231s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787546158s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193656921s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787524223s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193656921s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788177490s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194313049s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787768364s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.193992615s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788104057s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194313049s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787747383s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.193992615s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.910818100s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317085266s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.910787582s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317047119s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.910778046s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317085266s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.910731316s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317047119s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787279129s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194007874s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787255287s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194007874s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.10( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787128448s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194007874s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787140846s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194061279s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787103653s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194007874s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787117958s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194061279s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.787117004s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194061279s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787326813s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194381714s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787071228s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194061279s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909996033s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317077637s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787307739s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194381714s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909976959s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317077637s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787222862s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194442749s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787198067s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194442749s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.9( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909942627s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 83.317192078s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786861420s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194168091s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786796570s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194129944s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.9( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909870148s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 83.317192078s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786819458s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194198608s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786805153s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194168091s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786797523s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194198608s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786253929s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194152832s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786315918s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194267273s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.11( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786208153s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194152832s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786292076s) [0] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194267273s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.e( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909038544s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 83.317123413s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786027908s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194129944s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.e( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.909008026s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 83.317123413s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786204338s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194374084s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.d( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908976555s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 83.317146301s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786079407s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194305420s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786164284s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194374084s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.786061287s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194305420s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.d( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908883095s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 83.317146301s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908768654s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317115784s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.17( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908626556s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317115784s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908273697s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317153931s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.790143967s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.199142456s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908158302s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317161560s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=49/51 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.908170700s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317153931s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.12( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785473824s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194641113s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.13( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785130501s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194549561s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.14( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.907771111s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 83.317192078s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785049438s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194549561s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.14( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.907659531s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 83.317192078s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.15( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785076141s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194725037s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785057068s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194725037s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.789924622s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.199142456s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.907494545s) [1] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317161560s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.785320282s) [1] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194641113s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.15( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.907355309s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 35'16 active pruub 83.317176819s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788032532s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.197891235s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.788012505s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.197891235s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.15( v 51'17 (0'0,51'17] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.907303810s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 35'16 mlcod 0'0 unknown NOTIFY pruub 83.317176819s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.12( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.784483910s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.194801331s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.906848907s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317192078s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787620544s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.197975159s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.906826973s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317192078s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787576675s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.197975159s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.14( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.1a( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.783991814s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.194801331s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.906202316s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active pruub 83.317207336s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=49/51 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52 pruub=8.906173706s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.317207336s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786719322s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 83.197891235s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=41/44 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52 pruub=8.786702156s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.197891235s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787779808s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active pruub 83.199142456s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.16( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.1e( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.19( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.18( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.8( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.16( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.13( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.14( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52 pruub=8.787741661s) [1] r=-1 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 83.199142456s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.6( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.9( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.11( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.9( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.d( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.15( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.f( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.7( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.5( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.7( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.d( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.f( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.4( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.4( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.b( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[4.2( empty local-lis/les=0/0 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.8( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.7( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.8( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.2( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.5( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.11( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.4( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.10( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.12( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.9( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.3( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[5.2( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.1d( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.857205391s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.486763000s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.857173920s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.486763000s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.788203239s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417816162s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.788167953s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417816162s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.788183212s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417869568s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.788143158s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417869568s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.787726402s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417816162s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.793228149s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423332214s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.787698746s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417816162s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.793208122s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423332214s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.787457466s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417800903s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.787395477s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417800903s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.860441208s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.490974426s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.860420227s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.490974426s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792739868s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423271179s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.787227631s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417892456s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792643547s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423271179s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792794228s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423484802s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.787177086s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417892456s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792774200s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423484802s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.783559799s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414535522s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.783502579s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414535522s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.783287048s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414505005s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.783268929s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414505005s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.860280991s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491096497s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791881561s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423355103s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791820526s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423355103s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791843414s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423469543s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.859481812s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491096497s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791821480s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423469543s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.d( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.1( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.1c( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791752815s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423721313s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.15( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.1e( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791732788s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423721313s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.859150887s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491172791s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.858891487s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.490982056s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.859103203s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491172791s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.858875275s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.490982056s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.785293579s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417884827s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.1d( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.785241127s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417884827s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790006638s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.422691345s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789958000s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.422691345s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.1a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.15( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790723801s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423789978s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790676117s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423789978s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790492058s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423561096s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790179253s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423561096s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.780909538s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414421082s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.858075142s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491607666s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.17( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[2.1f( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[10.16( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.858024597s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491607666s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.780848503s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414421082s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.780668259s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414497375s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.780625343s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414497375s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.857177734s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491127014s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.780667305s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414680481s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.857131004s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491127014s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.780526161s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414421082s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.780619621s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414680481s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.780302048s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414421082s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789671898s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423812866s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789633751s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423812866s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789542198s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423782349s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.856815338s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491149902s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.856791496s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491149902s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789495468s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423782349s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.794953346s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.429428101s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.794935226s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429428101s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.779895782s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414398193s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.779864311s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414398193s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.779850960s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414413452s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.856511116s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491149902s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.779797554s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414413452s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.779530525s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414245605s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.856465340s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491149902s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.779503822s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414245605s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.779339790s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414222717s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788940430s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.423843384s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.779294968s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414222717s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789008141s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423973083s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.e( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788898468s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423843384s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788973808s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423973083s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.15( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.855963707s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491157532s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.7( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.1d( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.855909348s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491157532s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.c( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.11( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.777730942s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414184570s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.777702332s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414184570s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.17( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786898613s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.423866272s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786866188s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.423866272s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.853936195s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491325378s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.1b( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.776669502s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414184570s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.853899002s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491325378s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.776618958s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414184570s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786219597s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.424095154s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.776266098s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414169312s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786177635s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.424095154s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.12( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.776092529s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414169312s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.776009560s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.414199829s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.775964737s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.414199829s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.775331497s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413932800s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.775288582s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413932800s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.775119781s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413925171s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.775060654s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413917542s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.775067329s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413925171s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.775020599s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413917542s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.852431297s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491348267s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.852385521s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491348267s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.12( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.18( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.14( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.774798393s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413917542s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790207863s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.429458618s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.852411270s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491279602s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790445328s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.429710388s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790128708s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429458618s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.774775505s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413917542s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790320396s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429710388s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.852021217s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491462708s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.851974487s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491462708s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789714813s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.429512024s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.773963928s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413764954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789690971s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429512024s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.773869514s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413742065s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.773899078s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413764954s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.773812294s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413742065s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.1c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.7( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789587021s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.429718018s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789566040s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429718018s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.18( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.851199150s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491607666s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.851178169s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491607666s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.851904869s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491279602s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.772701263s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413314819s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.772662163s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413314819s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.850867271s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491615295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789785385s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.430534363s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789736748s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430534363s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788850784s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.429687500s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.2( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.772235870s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413154602s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.772204399s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413154602s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.850845337s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491615295s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.772057533s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413116455s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.772015572s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413116455s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.f( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788815498s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.429687500s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.d( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.771298409s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413116455s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.849765778s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491615295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.771262169s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413116455s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.1f( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.849740982s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491615295s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788393974s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.430351257s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.789003372s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.430984497s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.771003723s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413032532s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788945198s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430984497s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788334846s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430351257s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770841599s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413032532s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770813942s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413032532s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770620346s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.413024902s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788051605s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.430656433s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.770444870s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413032532s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=47/48 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.788022995s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430656433s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.1b( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770407677s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.413024902s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.4( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.848637581s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491645813s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.848608971s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491645813s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.787676811s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.430938721s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.787578583s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430938721s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.5( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.1( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.10( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786602020s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.430664062s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.14( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.5( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1b( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786562920s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430664062s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786143303s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.430664062s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786389351s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.430931091s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.768260002s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412811279s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786114693s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430664062s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.786360741s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430931091s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.768213272s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412811279s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.1f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.767839432s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412712097s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.767819405s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412712097s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.846790314s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491699219s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.846781731s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491722107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.846744537s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491699219s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.846710205s) [0] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491722107s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.11( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.10( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.3( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.1( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.3( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.c( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.e( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.766963005s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412788391s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.6( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.766901970s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412788391s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792809486s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.439002991s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.792767525s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.439002991s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.784534454s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.430953979s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.784504890s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.430953979s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.784186363s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.431022644s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.784163475s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.431022644s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844812393s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491706848s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844770432s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491706848s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844600677s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491706848s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.783916473s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.431121826s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844511986s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491706848s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.6( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.783894539s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.431121826s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.764976501s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412239075s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.e( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.764951706s) [2] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412239075s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770415306s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.417892456s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.770350456s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.417892456s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844104767s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491714478s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.844085693s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491714478s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791116714s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.438949585s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.764348984s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412170410s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.791099548s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.438949585s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.764307022s) [2] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412170410s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.754925728s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active pruub 90.403030396s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.843610764s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active pruub 94.491744995s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52 pruub=10.754908562s) [0] r=-1 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.403030396s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790827751s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 92.438980103s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=49/50 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52 pruub=14.843582153s) [2] r=-1 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 94.491744995s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790781975s) [0] r=-1 lpr=52 pi=[47,52)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.438980103s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.763880730s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active pruub 90.412307739s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=41/46 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52 pruub=10.763862610s) [0] r=-1 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.412307739s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790631294s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active pruub 92.439140320s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.d( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=47/48 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52 pruub=12.790570259s) [2] r=-1 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 92.439140320s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.9( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.8( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.2( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.3( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.9( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.14( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.3( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.8( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.2( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.8( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.2( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.e( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.4( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.18( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.a( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.1a( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[2.1b( empty local-lis/les=0/0 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.1b( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.11( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.1a( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.18( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.15( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.1( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.1c( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[10.13( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.1b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.a( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.1e( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[7.11( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[3.16( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 52 pg[5.19( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.b( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.9( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.1( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.f( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.9( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.4( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.4( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[11.1f( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.c( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.6( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.6( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.f( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.1a( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[11.19( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.12( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.18( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.1f( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.15( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[8.1d( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 52 pg[8.1c( empty local-lis/les=0/0 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[7.13( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:02 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 52 pg[3.17( empty local-lis/les=0/0 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 29 05:11:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 05:11:03 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 29 05:11:03 compute-0 ceph-mon[75176]: 5.3 scrub ok
Nov 29 05:11:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 05:11:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 05:11:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:11:03 compute-0 ceph-mon[75176]: osdmap e52: 3 total, 3 up, 3 in
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.16( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.15( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.11( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.2( v 31'4 (0'0,31'4] local-lis/les=52/53 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.d( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.4( v 31'4 (0'0,31'4] local-lis/les=52/53 n=1 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.1b( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.1c( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[8.12( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [2] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [2] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.6( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.b( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.2( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.f( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.13( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.1( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.d( v 51'17 lc 35'9 (0'0,51'17] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [2] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.14( v 51'17 lc 35'13 (0'0,51'17] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.17( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.e( v 51'17 lc 35'7 (0'0,51'17] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.7( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.15( v 51'17 lc 35'5 (0'0,51'17] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.8( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.9( v 51'17 lc 35'15 (0'0,51'17] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=51'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.1e( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[10.4( v 35'16 (0'0,35'16] local-lis/les=52/53 n=1 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.10( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.b( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.14( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.c( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.e( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.9( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.12( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.19( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.11( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.1a( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.f( v 35'39 lc 31'1 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.5( v 35'39 lc 31'11 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.7( v 35'39 lc 31'21 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[10.10( v 35'16 (0'0,35'16] local-lis/les=52/53 n=0 ec=49/34 lis/c=49/49 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[49,52)/1 crt=35'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=52/53 n=0 ec=41/12 lis/c=41/41 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=52/53 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=52/53 n=0 ec=43/16 lis/c=43/43 les/c/f=44/44/0 sis=52) [1] r=0 lpr=52 pi=[43,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 53 pg[6.d( v 35'39 lc 31'13 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[45,52)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.6( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.f( v 31'4 lc 0'0 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=52/53 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[45,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.1d( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.18( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=52/53 n=0 ec=41/14 lis/c=41/41 les/c/f=46/46/0 sis=52) [0] r=0 lpr=52 pi=[41,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.1f( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[8.1a( v 31'4 (0'0,31'4] local-lis/les=52/53 n=0 ec=47/30 lis/c=47/47 les/c/f=48/48/0 sis=52) [0] r=0 lpr=52 pi=[47,52)/1 crt=31'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=52/53 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=52) [0] r=0 lpr=52 pi=[49,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v115: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 05:11:03 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 05:11:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 05:11:03 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 05:11:03 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 29 05:11:03 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 29 05:11:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 29 05:11:04 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 05:11:04 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 05:11:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 05:11:04 compute-0 ceph-mon[75176]: osdmap e53: 3 total, 3 up, 3 in
Nov 29 05:11:04 compute-0 ceph-mon[75176]: pgmap v115: 305 pgs: 305 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:04 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 05:11:04 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 05:11:04 compute-0 ceph-mon[75176]: 5.6 scrub starts
Nov 29 05:11:04 compute-0 ceph-mon[75176]: 5.6 scrub ok
Nov 29 05:11:04 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 29 05:11:04 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.365850449s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.083297729s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:04 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.365795135s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.083297729s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:04 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372951508s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090682983s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:04 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372912407s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090682983s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:04 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372849464s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090759277s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:04 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372788429s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 95.090705872s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:04 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372824669s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090759277s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:04 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 54 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=8.372759819s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 95.090705872s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:04 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 54 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[47,53)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 29 05:11:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 05:11:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 05:11:05 compute-0 ceph-mon[75176]: osdmap e54: 3 total, 3 up, 3 in
Nov 29 05:11:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 05:11:05 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.389012337s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.040473938s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.388888359s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.040473938s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.395154953s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047615051s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.387639999s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.040473938s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.387569427s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.040473938s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.394632339s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047538757s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.394430161s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047538757s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.394090652s) [0] async=[0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047500610s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.394824028s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047615051s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55 pruub=15.393635750s) [0] r=-1 lpr=55 pi=[47,55)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047500610s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:05 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:05 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:05 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:05 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:05 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:05 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=54/55 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[6.6( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=54/55 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[6.e( v 35'39 lc 31'19 (0'0,35'39] local-lis/les=54/55 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:05 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:05 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 55 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:05 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:05 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:05 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 55 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 16 remapped+peering, 289 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 545 B/s, 2 keys/s, 4 objects/s recovering
Nov 29 05:11:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 29 05:11:06 compute-0 ceph-mon[75176]: osdmap e55: 3 total, 3 up, 3 in
Nov 29 05:11:06 compute-0 ceph-mon[75176]: pgmap v118: 305 pgs: 16 remapped+peering, 289 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 545 B/s, 2 keys/s, 4 objects/s recovering
Nov 29 05:11:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 05:11:06 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376647949s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.048171997s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376476288s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.048027039s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376409531s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.048027039s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376550674s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.048171997s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376393318s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.048110962s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.376317978s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.048110962s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375818253s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047706604s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375753403s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047706604s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375701904s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047813416s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375654221s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047813416s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375581741s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047927856s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375538826s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047912598s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375473976s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047912598s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375487328s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047927856s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.375023842s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047935486s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.374944687s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047935486s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.374961853s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.048049927s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.374711990s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047805786s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.374919891s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.048049927s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=53/54 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.374621391s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047805786s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.372932434s) [0] async=[0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 98.047706604s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:06 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 56 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=53/54 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56 pruub=14.372808456s) [0] r=-1 lpr=56 pi=[47,56)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 98.047706604s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.1( v 38'583 (0'0,38'583] local-lis/les=55/56 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:06 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 56 pg[9.1b( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:06 compute-0 ceph-mgr[75473]: [progress INFO root] Writing back 16 completed events
Nov 29 05:11:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 05:11:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:11:06 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 29 05:11:06 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 29 05:11:06 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 29 05:11:06 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 29 05:11:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 29 05:11:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 16 remapped+peering, 289 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 537 B/s, 2 keys/s, 4 objects/s recovering
Nov 29 05:11:07 compute-0 ceph-mon[75176]: osdmap e56: 3 total, 3 up, 3 in
Nov 29 05:11:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:11:07 compute-0 ceph-mon[75176]: 4.b scrub starts
Nov 29 05:11:07 compute-0 ceph-mon[75176]: 4.b scrub ok
Nov 29 05:11:07 compute-0 ceph-mon[75176]: 5.8 scrub starts
Nov 29 05:11:07 compute-0 ceph-mon[75176]: 5.8 scrub ok
Nov 29 05:11:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 05:11:07 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.b( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.5( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.9( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.11( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.3( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.d( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.1d( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 57 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=53/47 les/c/f=54/48/0 sis=56) [0] r=0 lpr=56 pi=[47,56)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:07 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 29 05:11:07 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 29 05:11:07 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Nov 29 05:11:07 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Nov 29 05:11:08 compute-0 ceph-mon[75176]: pgmap v120: 305 pgs: 16 remapped+peering, 289 active+clean; 456 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 537 B/s, 2 keys/s, 4 objects/s recovering
Nov 29 05:11:08 compute-0 ceph-mon[75176]: osdmap e57: 3 total, 3 up, 3 in
Nov 29 05:11:08 compute-0 ceph-mon[75176]: 4.c scrub starts
Nov 29 05:11:08 compute-0 ceph-mon[75176]: 4.c scrub ok
Nov 29 05:11:08 compute-0 ceph-mon[75176]: 2.c deep-scrub starts
Nov 29 05:11:08 compute-0 ceph-mon[75176]: 2.c deep-scrub ok
Nov 29 05:11:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 16 remapped+peering, 289 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 433 B/s, 1 keys/s, 4 objects/s recovering
Nov 29 05:11:09 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Nov 29 05:11:09 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Nov 29 05:11:09 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 29 05:11:09 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 29 05:11:10 compute-0 ceph-mon[75176]: pgmap v122: 305 pgs: 16 remapped+peering, 289 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 433 B/s, 1 keys/s, 4 objects/s recovering
Nov 29 05:11:10 compute-0 ceph-mon[75176]: 4.15 scrub starts
Nov 29 05:11:10 compute-0 ceph-mon[75176]: 4.15 scrub ok
Nov 29 05:11:10 compute-0 ceph-mon[75176]: 5.a scrub starts
Nov 29 05:11:10 compute-0 ceph-mon[75176]: 5.a scrub ok
Nov 29 05:11:10 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 29 05:11:10 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 29 05:11:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 372 B/s, 17 objects/s recovering
Nov 29 05:11:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 05:11:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 05:11:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 05:11:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 05:11:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 29 05:11:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 05:11:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 05:11:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 05:11:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 05:11:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 05:11:11 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 29 05:11:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:11:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:11:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:11:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:11:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:11:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:11:11 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599431038s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 35'39 active pruub 104.636581421s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:11 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599365234s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 104.636581421s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:11 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599466324s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 35'39 active pruub 104.636764526s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:11 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599435806s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 104.636764526s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:11 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:11 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599071503s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 35'39 active pruub 104.636917114s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:11 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.599032402s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 35'39 active pruub 104.637062073s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:11 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.598900795s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 104.637062073s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:11 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 58 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.598788261s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 104.636917114s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:11 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:11 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:11 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 29 05:11:12 compute-0 ceph-mon[75176]: 3.4 scrub starts
Nov 29 05:11:12 compute-0 ceph-mon[75176]: 3.4 scrub ok
Nov 29 05:11:12 compute-0 ceph-mon[75176]: pgmap v123: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 372 B/s, 17 objects/s recovering
Nov 29 05:11:12 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 05:11:12 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 05:11:12 compute-0 ceph-mon[75176]: osdmap e58: 3 total, 3 up, 3 in
Nov 29 05:11:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 05:11:12 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 29 05:11:12 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 59 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:12 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 59 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=58/59 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:12 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 59 pg[6.f( v 35'39 lc 31'1 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:12 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 59 pg[6.7( v 35'39 lc 31'21 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:12 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 29 05:11:12 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 29 05:11:12 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Nov 29 05:11:12 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Nov 29 05:11:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 373 B/s, 17 objects/s recovering
Nov 29 05:11:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 05:11:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 05:11:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 05:11:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 05:11:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 29 05:11:13 compute-0 ceph-mon[75176]: osdmap e59: 3 total, 3 up, 3 in
Nov 29 05:11:13 compute-0 ceph-mon[75176]: 2.e scrub starts
Nov 29 05:11:13 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 05:11:13 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 05:11:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 05:11:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 05:11:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 05:11:13 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 29 05:11:13 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 60 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=15.258614540s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 111.090835571s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:13 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 60 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=45/46 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=15.258539200s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 111.090835571s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:13 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 60 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=15.258361816s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 111.091163635s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:13 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 60 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=45/46 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60 pruub=15.258294106s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 111.091163635s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:13 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:13 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 29 05:11:14 compute-0 ceph-mon[75176]: 2.e scrub ok
Nov 29 05:11:14 compute-0 ceph-mon[75176]: 3.b deep-scrub starts
Nov 29 05:11:14 compute-0 ceph-mon[75176]: 3.b deep-scrub ok
Nov 29 05:11:14 compute-0 ceph-mon[75176]: pgmap v126: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 373 B/s, 17 objects/s recovering
Nov 29 05:11:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 05:11:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 05:11:14 compute-0 ceph-mon[75176]: osdmap e60: 3 total, 3 up, 3 in
Nov 29 05:11:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 05:11:14 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 29 05:11:14 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 61 pg[6.4( v 35'39 lc 31'15 (0'0,35'39] local-lis/les=60/61 n=2 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:14 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 61 pg[6.c( v 35'39 lc 31'17 (0'0,35'39] local-lis/les=60/61 n=1 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:14 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 29 05:11:14 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 29 05:11:14 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 29 05:11:14 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 29 05:11:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Nov 29 05:11:15 compute-0 ceph-mon[75176]: osdmap e61: 3 total, 3 up, 3 in
Nov 29 05:11:15 compute-0 ceph-mon[75176]: 4.16 scrub starts
Nov 29 05:11:15 compute-0 ceph-mon[75176]: 4.16 scrub ok
Nov 29 05:11:15 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 29 05:11:15 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 29 05:11:16 compute-0 ceph-mon[75176]: 3.d scrub starts
Nov 29 05:11:16 compute-0 ceph-mon[75176]: 3.d scrub ok
Nov 29 05:11:16 compute-0 ceph-mon[75176]: pgmap v129: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Nov 29 05:11:16 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 29 05:11:16 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 29 05:11:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 1 keys/s, 1 objects/s recovering
Nov 29 05:11:17 compute-0 ceph-mon[75176]: 3.10 scrub starts
Nov 29 05:11:17 compute-0 ceph-mon[75176]: 3.10 scrub ok
Nov 29 05:11:17 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 29 05:11:17 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 29 05:11:18 compute-0 ceph-mon[75176]: 3.13 scrub starts
Nov 29 05:11:18 compute-0 ceph-mon[75176]: 3.13 scrub ok
Nov 29 05:11:18 compute-0 ceph-mon[75176]: pgmap v130: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 1 keys/s, 1 objects/s recovering
Nov 29 05:11:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 317 B/s, 1 keys/s, 1 objects/s recovering
Nov 29 05:11:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 05:11:19 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 05:11:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 05:11:19 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 05:11:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 29 05:11:19 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 05:11:19 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 05:11:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 05:11:19 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 29 05:11:19 compute-0 ceph-mon[75176]: 3.14 scrub starts
Nov 29 05:11:19 compute-0 ceph-mon[75176]: 3.14 scrub ok
Nov 29 05:11:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 05:11:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 05:11:19 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 62 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=15.738649368s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=35'39 mlcod 35'39 active pruub 112.637779236s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:19 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 62 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=15.738382339s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 112.637779236s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:19 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 62 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=15.737648964s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=35'39 mlcod 35'39 active pruub 112.637100220s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:19 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 62 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=52/53 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=15.737483025s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 112.637100220s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:19 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:19 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:19 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 29 05:11:19 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 29 05:11:19 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 29 05:11:19 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 29 05:11:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 29 05:11:20 compute-0 ceph-mon[75176]: pgmap v131: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 317 B/s, 1 keys/s, 1 objects/s recovering
Nov 29 05:11:20 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 05:11:20 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 05:11:20 compute-0 ceph-mon[75176]: osdmap e62: 3 total, 3 up, 3 in
Nov 29 05:11:20 compute-0 ceph-mon[75176]: 4.17 scrub starts
Nov 29 05:11:20 compute-0 ceph-mon[75176]: 4.17 scrub ok
Nov 29 05:11:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 05:11:20 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 29 05:11:20 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 63 pg[6.5( v 35'39 lc 31'11 (0'0,35'39] local-lis/les=62/63 n=2 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:20 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 63 pg[6.d( v 35'39 lc 31'13 (0'0,35'39] local-lis/les=62/63 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:20 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Nov 29 05:11:20 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Nov 29 05:11:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 318 B/s, 1 keys/s, 1 objects/s recovering
Nov 29 05:11:21 compute-0 ceph-mon[75176]: 3.19 scrub starts
Nov 29 05:11:21 compute-0 ceph-mon[75176]: 3.19 scrub ok
Nov 29 05:11:21 compute-0 ceph-mon[75176]: osdmap e63: 3 total, 3 up, 3 in
Nov 29 05:11:21 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 29 05:11:21 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 29 05:11:22 compute-0 ceph-mon[75176]: 3.1a deep-scrub starts
Nov 29 05:11:22 compute-0 ceph-mon[75176]: 3.1a deep-scrub ok
Nov 29 05:11:22 compute-0 ceph-mon[75176]: pgmap v134: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 318 B/s, 1 keys/s, 1 objects/s recovering
Nov 29 05:11:22 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 29 05:11:22 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 29 05:11:22 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 29 05:11:22 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 29 05:11:22 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 29 05:11:22 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 29 05:11:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 195 B/s, 0 objects/s recovering
Nov 29 05:11:23 compute-0 ceph-mon[75176]: 3.1c scrub starts
Nov 29 05:11:23 compute-0 ceph-mon[75176]: 3.1c scrub ok
Nov 29 05:11:23 compute-0 ceph-mon[75176]: 4.19 scrub starts
Nov 29 05:11:23 compute-0 ceph-mon[75176]: 4.19 scrub ok
Nov 29 05:11:23 compute-0 ceph-mon[75176]: 5.b scrub starts
Nov 29 05:11:23 compute-0 ceph-mon[75176]: 5.b scrub ok
Nov 29 05:11:23 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 29 05:11:23 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 29 05:11:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:24 compute-0 ceph-mon[75176]: 7.7 scrub starts
Nov 29 05:11:24 compute-0 ceph-mon[75176]: 7.7 scrub ok
Nov 29 05:11:24 compute-0 ceph-mon[75176]: pgmap v135: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 195 B/s, 0 objects/s recovering
Nov 29 05:11:24 compute-0 ceph-mon[75176]: 4.1d scrub starts
Nov 29 05:11:24 compute-0 ceph-mon[75176]: 4.1d scrub ok
Nov 29 05:11:24 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 29 05:11:24 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 29 05:11:24 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 29 05:11:24 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 29 05:11:24 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.b deep-scrub starts
Nov 29 05:11:24 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.b deep-scrub ok
Nov 29 05:11:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v136: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 215 B/s, 1 objects/s recovering
Nov 29 05:11:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 05:11:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 05:11:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 05:11:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 05:11:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 29 05:11:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 05:11:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 05:11:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 05:11:25 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 29 05:11:25 compute-0 ceph-mon[75176]: 4.1e scrub starts
Nov 29 05:11:25 compute-0 ceph-mon[75176]: 2.10 scrub starts
Nov 29 05:11:25 compute-0 ceph-mon[75176]: 4.1e scrub ok
Nov 29 05:11:25 compute-0 ceph-mon[75176]: 2.10 scrub ok
Nov 29 05:11:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 05:11:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 05:11:25 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 29 05:11:25 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 29 05:11:26 compute-0 ceph-mon[75176]: 7.b deep-scrub starts
Nov 29 05:11:26 compute-0 ceph-mon[75176]: 7.b deep-scrub ok
Nov 29 05:11:26 compute-0 ceph-mon[75176]: pgmap v136: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 215 B/s, 1 objects/s recovering
Nov 29 05:11:26 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 05:11:26 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 05:11:26 compute-0 ceph-mon[75176]: osdmap e64: 3 total, 3 up, 3 in
Nov 29 05:11:26 compute-0 ceph-mon[75176]: 4.1f scrub starts
Nov 29 05:11:26 compute-0 ceph-mon[75176]: 4.1f scrub ok
Nov 29 05:11:26 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.d deep-scrub starts
Nov 29 05:11:26 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.d deep-scrub ok
Nov 29 05:11:26 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 29 05:11:26 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 29 05:11:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v138: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Nov 29 05:11:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 05:11:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 05:11:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 05:11:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.584018707s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.424026489s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.583935738s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.424026489s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589673996s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.430610657s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589609146s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.430610657s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589757919s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.431015015s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589647293s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.431015015s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589888573s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.431617737s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 64 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64 pruub=11.589848518s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.431617737s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 29 05:11:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 05:11:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
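
The paired dispatch/finished audit entries above are the mgr stepping pgp_num_actual upward on cephfs.cephfs.meta and default.rgw.log, one increment per osdmap epoch. A minimal sketch of the equivalent manual CLI, assuming a client.admin keyring is available on the node (pool names and the value 8 are taken from the entries above):

    ceph osd pool get cephfs.cephfs.meta pg_num              # target PG count for the pool
    ceph osd pool get cephfs.cephfs.meta pgp_num             # how many of those currently drive placement
    ceph osd pool set cephfs.cephfs.meta pgp_num_actual 8    # same operation the mgr dispatched above
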
Nov 29 05:11:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 05:11:27 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 29 05:11:27 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.713256836s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 active pruub 121.377464294s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.713195801s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 121.377464294s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.713134766s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 active pruub 121.377655029s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.713078499s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 121.377655029s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-mon[75176]: 5.d deep-scrub starts
Nov 29 05:11:27 compute-0 ceph-mon[75176]: 5.d deep-scrub ok
Nov 29 05:11:27 compute-0 ceph-mon[75176]: 6.8 scrub starts
Nov 29 05:11:27 compute-0 ceph-mon[75176]: 6.8 scrub ok
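
The 5.d deep-scrub and 6.8 scrub start/ok pairs are routine background consistency checks. To trigger the same checks on demand, a sketch using the PG ids from the log (assumes admin privileges):

    ceph pg scrub 6.8          # shallow scrub: compares replica metadata
    ceph pg deep-scrub 5.d     # deep scrub: also reads back and checksums object data
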
Nov 29 05:11:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 05:11:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 05:11:27 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.712237358s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 active pruub 121.377822876s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65 pruub=11.711883545s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 121.377822876s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=65 pruub=10.695466995s) [2] r=-1 lpr=65 pi=[55,65)/1 crt=38'583 mlcod 0'0 active pruub 120.362739563s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=65) [2] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 65 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=65 pruub=10.695433617s) [2] r=-1 lpr=65 pi=[55,65)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 120.362739563s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=65) [2] r=0 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[47,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 65 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:27 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Nov 29 05:11:27 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Nov 29 05:11:27 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 29 05:11:27 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 29 05:11:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 29 05:11:28 compute-0 ceph-mon[75176]: pgmap v138: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Nov 29 05:11:28 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 05:11:28 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 05:11:28 compute-0 ceph-mon[75176]: osdmap e65: 3 total, 3 up, 3 in
Nov 29 05:11:28 compute-0 ceph-mon[75176]: 2.12 scrub starts
Nov 29 05:11:28 compute-0 ceph-mon[75176]: 2.12 scrub ok
Nov 29 05:11:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 05:11:28 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
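
Each accepted pool change commits a new osdmap epoch (e64, e65, e66 above) even though membership never leaves 3 up / 3 in. A sketch for confirming the live epoch and OSD counts (assumes client.admin):

    ceph osd stat                # prints something like: 3 osds: 3 up, 3 in
    ceph osd dump | head -n 5    # first lines carry the epoch and cluster fsid
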
Nov 29 05:11:28 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:28 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:28 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:28 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:28 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:28 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:28 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[55,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:28 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2]/[0] r=-1 lpr=66 pi=[55,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:28 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2]/[0] r=0 lpr=66 pi=[55,66)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:28 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2]/[0] r=0 lpr=66 pi=[55,66)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:28 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:28 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:28 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:28 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:28 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:28 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 66 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=56/57 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:28 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 66 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:28 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 66 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:28 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 66 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:28 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 66 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[47,65)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
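
The start_peering_interval and state&lt;Start&gt; transitions above are the PG state machine reacting to each new map: the OSD leaving the acting set transitions to Stray, the new acting_primary transitions to Primary, and the PG returns to active once the primary reacts to AllReplicasActivated. To inspect the same history interactively, a sketch (the PG id comes from the lines above; pool 9's name is not shown in this log, so the ls-by-pool argument is an assumption):

    ceph pg 9.16 query | less                # acting/up sets, peering history, recovery state
    ceph pg ls-by-pool cephfs.cephfs.meta    # assumed pool name; lists PG states for one pool
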
Nov 29 05:11:28 compute-0 sudo[104840]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwzjkegllluqwrsijduxyawrejjpmtqy ; /usr/bin/python3'
Nov 29 05:11:28 compute-0 sudo[104840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:11:28 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 29 05:11:28 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 29 05:11:28 compute-0 python3[104842]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
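
The Ansible task above runs radosgw-admin inside a throwaway ceph container. The same invocation reflowed for readability (all arguments exactly as logged):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint radosgw-admin quay.io/ceph/ceph:v18 \
        --fsid 93f82912-647c-5e78-b081-707d0a2966d8 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        user info --uid openstack
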
Nov 29 05:11:29 compute-0 podman[104843]: 2025-11-29 05:11:29.018808422 +0000 UTC m=+0.045707453 container create 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:11:29 compute-0 systemd[1]: Started libpod-conmon-7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68.scope.
Nov 29 05:11:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:11:29 compute-0 podman[104843]: 2025-11-29 05:11:28.996506354 +0000 UTC m=+0.023405385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b547f92855f69286cbaa4f4905258e7d22f90bf0dc82328602bfafd191af287/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b547f92855f69286cbaa4f4905258e7d22f90bf0dc82328602bfafd191af287/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:29 compute-0 podman[104843]: 2025-11-29 05:11:29.112071669 +0000 UTC m=+0.138970760 container init 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:11:29 compute-0 podman[104843]: 2025-11-29 05:11:29.120092838 +0000 UTC m=+0.146991859 container start 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:11:29 compute-0 podman[104843]: 2025-11-29 05:11:29.123661383 +0000 UTC m=+0.150560464 container attach 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:11:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 29 05:11:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 05:11:29 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 29 05:11:29 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:29 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:29 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:29 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:29 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:29 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:29 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:29 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 67 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:29 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.469320297s) [2] async=[2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 122.020950317s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:29 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.469220161s) [2] async=[2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 122.020973206s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:29 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.469173431s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.020950317s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:29 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.466773033s) [2] async=[2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 122.018760681s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:29 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=65/66 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.466724396s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.018760681s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:29 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.468585014s) [2] async=[2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 122.020935059s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:29 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.468539238s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.020935059s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:29 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 67 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=65/66 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67 pruub=15.468110085s) [2] r=-1 lpr=67 pi=[47,67)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 122.020973206s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:29 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 67 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:29 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 67 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[55,66)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:29 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 67 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:29 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 67 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[56,66)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 05:11:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 05:11:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 05:11:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 05:11:29 compute-0 objective_gould[104858]: could not fetch user info: no user info saved
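
radosgw-admin exits non-zero with "could not fetch user info: no user info saved" when the uid does not exist; the playbook uses that failure as its existence probe and follows up with "user create" (logged below). The probe-then-create pattern as a plain-shell sketch, assuming radosgw-admin is invoked the same containerized way as above:

    if ! radosgw-admin user info --uid openstack >/dev/null 2>&1; then
        radosgw-admin user create --uid openstack --display-name "openstack"
    fi
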
Nov 29 05:11:29 compute-0 systemd[1]: libpod-7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68.scope: Deactivated successfully.
Nov 29 05:11:29 compute-0 podman[104843]: 2025-11-29 05:11:29.391675906 +0000 UTC m=+0.418574947 container died 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:11:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b547f92855f69286cbaa4f4905258e7d22f90bf0dc82328602bfafd191af287-merged.mount: Deactivated successfully.
Nov 29 05:11:29 compute-0 podman[104843]: 2025-11-29 05:11:29.442613012 +0000 UTC m=+0.469512053 container remove 7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68 (image=quay.io/ceph/ceph:v18, name=objective_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:11:29 compute-0 systemd[1]: libpod-conmon-7e31b9ebbc05a92f5f87395732912e9dbce864b6547c9164f8d35fde5d98af68.scope: Deactivated successfully.
Nov 29 05:11:29 compute-0 sudo[104840]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:29 compute-0 ceph-mon[75176]: 7.d scrub starts
Nov 29 05:11:29 compute-0 ceph-mon[75176]: 7.d scrub ok
Nov 29 05:11:29 compute-0 ceph-mon[75176]: osdmap e66: 3 total, 3 up, 3 in
Nov 29 05:11:29 compute-0 ceph-mon[75176]: 5.1e scrub starts
Nov 29 05:11:29 compute-0 ceph-mon[75176]: 5.1e scrub ok
Nov 29 05:11:29 compute-0 ceph-mon[75176]: osdmap e67: 3 total, 3 up, 3 in
Nov 29 05:11:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 05:11:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 05:11:29 compute-0 sudo[104979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsvnbogxcfviwtinokeckynbwzrjenza ; /usr/bin/python3'
Nov 29 05:11:29 compute-0 sudo[104979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:11:29 compute-0 python3[104981]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:11:29 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 29 05:11:29 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 29 05:11:29 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 29 05:11:29 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 29 05:11:29 compute-0 podman[104982]: 2025-11-29 05:11:29.892253302 +0000 UTC m=+0.048648952 container create 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:11:29 compute-0 systemd[1]: Started libpod-conmon-330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46.scope.
Nov 29 05:11:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:11:29 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 29 05:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1677cda9adcb3b14b68600db95b7a3eb91e7ff7d918e599800dde1ea9238dd68/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1677cda9adcb3b14b68600db95b7a3eb91e7ff7d918e599800dde1ea9238dd68/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:29 compute-0 podman[104982]: 2025-11-29 05:11:29.875410484 +0000 UTC m=+0.031806124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 05:11:29 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 29 05:11:29 compute-0 podman[104982]: 2025-11-29 05:11:29.979570159 +0000 UTC m=+0.135965799 container init 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:11:29 compute-0 podman[104982]: 2025-11-29 05:11:29.985998741 +0000 UTC m=+0.142394361 container start 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:11:29 compute-0 podman[104982]: 2025-11-29 05:11:29.988671864 +0000 UTC m=+0.145067484 container attach 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]: {
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "user_id": "openstack",
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "display_name": "openstack",
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "email": "",
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "suspended": 0,
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "max_buckets": 1000,
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "subusers": [],
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "keys": [
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         {
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:             "user": "openstack",
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:             "access_key": "BVHCHSDCJ5LYYWQFI2Q3",
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:             "secret_key": "5v911KYTEXlGLdbwGYEKOjV4DFSvchdMwWFkshhZ"
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         }
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     ],
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "swift_keys": [],
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "caps": [],
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "op_mask": "read, write, delete",
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "default_placement": "",
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "default_storage_class": "",
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "placement_tags": [],
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "bucket_quota": {
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         "enabled": false,
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         "check_on_raw": false,
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         "max_size": -1,
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         "max_size_kb": 0,
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         "max_objects": -1
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     },
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "user_quota": {
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         "enabled": false,
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         "check_on_raw": false,
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         "max_size": -1,
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         "max_size_kb": 0,
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:         "max_objects": -1
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     },
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "temp_url_keys": [],
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "type": "rgw",
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]:     "mfa_ids": []
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]: }
Nov 29 05:11:30 compute-0 frosty_elgamal[104998]: 
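
The JSON above is the newly created RGW user, including its generated S3 keypair. A smoke test with those credentials, as a sketch: awscli and the endpoint URL are assumptions, since RGW's listen address and port are not shown in this log (192.168.122.100 is borrowed from the mgr address above, 8080 is a common RGW default):

    export AWS_ACCESS_KEY_ID=BVHCHSDCJ5LYYWQFI2Q3
    export AWS_SECRET_ACCESS_KEY=5v911KYTEXlGLdbwGYEKOjV4DFSvchdMwWFkshhZ
    aws --endpoint-url http://192.168.122.100:8080 s3 ls    # empty output is expected for a brand-new user
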
Nov 29 05:11:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 29 05:11:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 05:11:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 05:11:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 05:11:30 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 29 05:11:30 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 68 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=8.875186920s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.430702209s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 68 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=8.875101089s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.430702209s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:30 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 68 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=8.875391006s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 116.431510925s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 68 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=8.875341415s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 116.431510925s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/55 les/c/f=67/56/0 sis=68) [2] r=0 lpr=68 pi=[55,68)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=66/55 les/c/f=67/56/0 sis=68) [2] r=0 lpr=68 pi=[55,68)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/55 les/c/f=67/56/0 sis=68 pruub=15.000617981s) [2] async=[2] r=-1 lpr=68 pi=[55,68)/1 crt=38'583 mlcod 38'583 active pruub 127.244735718s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=15.000761986s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 38'583 active pruub 127.244918823s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/55 les/c/f=67/56/0 sis=68 pruub=15.000556946s) [2] r=-1 lpr=68 pi=[55,68)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 127.244735718s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=66/67 n=6 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=15.000699043s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 127.244918823s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=15.000412941s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 38'583 active pruub 127.245010376s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=15.000065804s) [2] async=[2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 38'583 active pruub 127.244720459s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=15.000247955s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 127.245010376s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=66/67 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68 pruub=14.999962807s) [2] r=-1 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 127.244720459s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=68 pruub=14.846203804s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 127.091217041s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 68 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=68 pruub=14.846153259s) [2] r=-1 lpr=68 pi=[45,68)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.091217041s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.6( v 38'583 (0'0,38'583] local-lis/les=67/68 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 68 pg[9.e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=7 ec=47/32 lis/c=65/47 les/c/f=66/48/0 sis=67) [2] r=0 lpr=67 pi=[47,67)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
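
The osd.0/osd.2 traffic above is a normal peering round on a new osdmap interval (the sis= epoch): each affected PG runs start_peering_interval, the OSD whose role drops to -1 transitions to Stray, the new acting_primary transitions to Primary, and "react AllReplicasActivated Activating complete" marks the PG active again. A quick way to inspect one of these PGs by hand, assuming admin access to this cluster from the node:

    # map the pool id (the "9" in pg 9.17) to a pool name, then query the PG
    ceph osd pool ls detail
    ceph pg 9.17 query | less
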
Nov 29 05:11:30 compute-0 systemd[1]: libpod-330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46.scope: Deactivated successfully.
Nov 29 05:11:30 compute-0 podman[104982]: 2025-11-29 05:11:30.206047488 +0000 UTC m=+0.362443158 container died 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:11:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1677cda9adcb3b14b68600db95b7a3eb91e7ff7d918e599800dde1ea9238dd68-merged.mount: Deactivated successfully.
Nov 29 05:11:30 compute-0 podman[104982]: 2025-11-29 05:11:30.249738112 +0000 UTC m=+0.406133762 container remove 330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46 (image=quay.io/ceph/ceph:v18, name=frosty_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:11:30 compute-0 systemd[1]: libpod-conmon-330c130271151f2d9f400cbeb8b6a7c5184c190e76fafbf241f1a572dcb0ba46.scope: Deactivated successfully.
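
The five podman/systemd lines above are one complete container teardown for the short-lived cephadm helper frosty_elgamal (image quay.io/ceph/ceph:v18): the libpod scope deactivates, podman records the died event, the overlay mount is released, the container record is removed, and the conmon scope exits. A sketch for pulling the same lifecycle back out of podman's event log (the time window is an assumption):

    # one-shot (non-streaming) dump of recent died/remove events
    podman events --stream=false --since 10m \
        --filter event=died --filter event=remove
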
Nov 29 05:11:30 compute-0 sudo[104979]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:30 compute-0 ceph-mon[75176]: pgmap v142: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:30 compute-0 ceph-mon[75176]: 5.e scrub starts
Nov 29 05:11:30 compute-0 ceph-mon[75176]: 5.e scrub ok
Nov 29 05:11:30 compute-0 ceph-mon[75176]: 2.18 scrub starts
Nov 29 05:11:30 compute-0 ceph-mon[75176]: 2.18 scrub ok
Nov 29 05:11:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 05:11:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
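
Note the values: the mgr raises pgp_num_actual one step at a time (9 here, then 10, 11, 12, 13 below) instead of jumping to the target, so only a handful of PGs re-peer per osdmap epoch. The ramp can be followed from the CLI (pool names verbatim from the log):

    # effective pgp_num for the two pools being split
    ceph osd pool get cephfs.cephfs.meta pgp_num
    ceph osd pool get default.rgw.log pgp_num
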
Nov 29 05:11:30 compute-0 ceph-mon[75176]: osdmap e68: 3 total, 3 up, 3 in
Nov 29 05:11:30 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 29 05:11:30 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 29 05:11:30 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 29 05:11:30 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.19 scrub ok
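
Interleaved with the peering, the OSDs keep working through their scheduled scrubs; each "scrub starts"/"scrub ok" pair is one PG passing a consistency check, and the mon re-logs the same cluster-log entries moments later, which is why every pair shows up twice in this capture. The same check can be requested manually for any PG:

    # ask the primary OSD to (deep-)scrub one PG now
    ceph pg scrub 2.19
    ceph pg deep-scrub 2.19
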
Nov 29 05:11:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 29 05:11:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 05:11:31 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 29 05:11:31 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 69 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:31 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 69 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:31 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 69 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:31 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 69 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[47,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.17( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.7( v 38'583 (0'0,38'583] local-lis/les=68/69 n=7 ec=47/32 lis/c=66/56 les/c/f=67/57/0 sis=68) [2] r=0 lpr=68 pi=[56,68)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=66/55 les/c/f=67/56/0 sis=68) [2] r=0 lpr=68 pi=[55,68)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 69 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=68/69 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=68) [2] r=0 lpr=68 pi=[45,68)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s; 208 B/s, 12 objects/s recovering
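
The mgr's pgmap digest is the fastest health readout in this stream: 305 PGs total, 4 active+remapped and 4 peering while the split is in flight, returning to 305 active+clean by pgmap v158 further down. Equivalent interactive views:

    ceph pg stat
    ceph -s
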
Nov 29 05:11:31 compute-0 ceph-mon[75176]: 7.10 scrub starts
Nov 29 05:11:31 compute-0 ceph-mon[75176]: 7.10 scrub ok
Nov 29 05:11:31 compute-0 ceph-mon[75176]: osdmap e69: 3 total, 3 up, 3 in
Nov 29 05:11:31 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 29 05:11:31 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 29 05:11:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 29 05:11:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 05:11:32 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
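
Each pgp_num_actual step commits a fresh osdmap epoch, which is why the map advances from e68 to e80 within about half a minute here even though "3 total, 3 up, 3 in" never changes; the do_prune lines are just the mon trimming old full maps. The current epoch is on the first line of the map dump:

    ceph osd dump | head -n 1
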
Nov 29 05:11:32 compute-0 ceph-mon[75176]: 7.12 scrub starts
Nov 29 05:11:32 compute-0 ceph-mon[75176]: 7.12 scrub ok
Nov 29 05:11:32 compute-0 ceph-mon[75176]: 2.19 scrub starts
Nov 29 05:11:32 compute-0 ceph-mon[75176]: 2.19 scrub ok
Nov 29 05:11:32 compute-0 ceph-mon[75176]: pgmap v145: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s; 208 B/s, 12 objects/s recovering
Nov 29 05:11:32 compute-0 ceph-mon[75176]: osdmap e70: 3 total, 3 up, 3 in
Nov 29 05:11:32 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 70 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=69/70 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:32 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 70 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=69/70 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[47,69)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 990 B/s rd, 0 op/s; 186 B/s, 11 objects/s recovering
Nov 29 05:11:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 29 05:11:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 05:11:33 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 29 05:11:33 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 71 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=69/70 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=15.196173668s) [2] async=[2] r=-1 lpr=71 pi=[47,71)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 126.239318848s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:33 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 71 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=69/70 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=15.196031570s) [2] r=-1 lpr=71 pi=[47,71)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.239318848s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:33 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 71 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=69/70 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=15.192161560s) [2] async=[2] r=-1 lpr=71 pi=[47,71)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 126.235893250s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:33 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 71 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=69/70 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71 pruub=15.192124367s) [2] r=-1 lpr=71 pi=[47,71)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.235893250s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:33 compute-0 ceph-mon[75176]: 7.14 scrub starts
Nov 29 05:11:33 compute-0 ceph-mon[75176]: 7.14 scrub ok
Nov 29 05:11:33 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 71 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:33 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 71 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:33 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 71 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:33 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 71 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:33 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Nov 29 05:11:33 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Nov 29 05:11:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
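
_set_new_cache_sizes is the monitor's periodic memory autotuning: roughly 1.02 GB of cache split across incremental-map, full-map and RocksDB (kv) allocations. The identical numbers recur every few seconds, so this is steady state rather than churn. The budget it tunes against is a config option (option name assumed from recent releases):

    ceph config get mon mon_memory_target
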
Nov 29 05:11:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 29 05:11:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 05:11:34 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 29 05:11:34 compute-0 ceph-mon[75176]: pgmap v147: 305 pgs: 4 active+remapped, 4 peering, 297 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 990 B/s rd, 0 op/s; 186 B/s, 11 objects/s recovering
Nov 29 05:11:34 compute-0 ceph-mon[75176]: osdmap e71: 3 total, 3 up, 3 in
Nov 29 05:11:34 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 72 pg[9.8( v 38'583 (0'0,38'583] local-lis/les=71/72 n=7 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:34 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 72 pg[9.18( v 38'583 (0'0,38'583] local-lis/les=71/72 n=6 ec=47/32 lis/c=69/47 les/c/f=70/48/0 sis=71) [2] r=0 lpr=71 pi=[47,71)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:34 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Nov 29 05:11:34 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Nov 29 05:11:34 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Nov 29 05:11:34 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Nov 29 05:11:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 496 B/s wr, 3 op/s; 53 B/s, 3 objects/s recovering
Nov 29 05:11:35 compute-0 ceph-mon[75176]: 2.14 deep-scrub starts
Nov 29 05:11:35 compute-0 ceph-mon[75176]: 2.14 deep-scrub ok
Nov 29 05:11:35 compute-0 ceph-mon[75176]: osdmap e72: 3 total, 3 up, 3 in
Nov 29 05:11:36 compute-0 ceph-mon[75176]: 5.10 deep-scrub starts
Nov 29 05:11:36 compute-0 ceph-mon[75176]: 5.10 deep-scrub ok
Nov 29 05:11:36 compute-0 ceph-mon[75176]: 2.16 deep-scrub starts
Nov 29 05:11:36 compute-0 ceph-mon[75176]: 2.16 deep-scrub ok
Nov 29 05:11:36 compute-0 ceph-mon[75176]: pgmap v150: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 496 B/s wr, 3 op/s; 53 B/s, 3 objects/s recovering
Nov 29 05:11:36 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 29 05:11:36 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 29 05:11:36 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 29 05:11:36 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 29 05:11:36 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 29 05:11:36 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 29 05:11:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s; 36 B/s, 2 objects/s recovering
Nov 29 05:11:37 compute-0 ceph-mon[75176]: 7.16 scrub starts
Nov 29 05:11:37 compute-0 ceph-mon[75176]: 7.16 scrub ok
Nov 29 05:11:37 compute-0 ceph-mon[75176]: 5.14 scrub starts
Nov 29 05:11:37 compute-0 ceph-mon[75176]: 5.14 scrub ok
Nov 29 05:11:37 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 29 05:11:37 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 29 05:11:38 compute-0 ceph-mon[75176]: 2.1a scrub starts
Nov 29 05:11:38 compute-0 ceph-mon[75176]: 2.1a scrub ok
Nov 29 05:11:38 compute-0 ceph-mon[75176]: pgmap v151: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s; 36 B/s, 2 objects/s recovering
Nov 29 05:11:38 compute-0 ceph-mon[75176]: 10.1e scrub starts
Nov 29 05:11:38 compute-0 ceph-mon[75176]: 10.1e scrub ok
Nov 29 05:11:38 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 29 05:11:38 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 29 05:11:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 287 B/s wr, 1 op/s; 30 B/s, 1 objects/s recovering
Nov 29 05:11:39 compute-0 ceph-mon[75176]: 2.13 scrub starts
Nov 29 05:11:39 compute-0 ceph-mon[75176]: 2.13 scrub ok
Nov 29 05:11:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 29 05:11:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 29 05:11:40 compute-0 ceph-mon[75176]: pgmap v152: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 287 B/s wr, 1 op/s; 30 B/s, 1 objects/s recovering
Nov 29 05:11:40 compute-0 ceph-mon[75176]: 2.11 scrub starts
Nov 29 05:11:40 compute-0 ceph-mon[75176]: 2.11 scrub ok
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:11:41
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Some PGs (0.006557) are inactive; try again later
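
The balancer wakes, plans in upmap mode with a 5% misplaced ceiling, then bails out because a fraction 0.006557 of PGs (2 of 305, the two still peering) are not active yet; it simply retries on its next cycle. The same decision can be checked interactively:

    ceph balancer status
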
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 27 B/s, 1 objects/s recovering
Nov 29 05:11:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 05:11:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 05:11:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 05:11:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:11:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:11:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 29 05:11:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 05:11:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 05:11:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 05:11:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 05:11:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 05:11:41 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 29 05:11:41 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 29 05:11:41 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 29 05:11:41 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 29 05:11:41 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 29 05:11:42 compute-0 ceph-mon[75176]: pgmap v153: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s; 27 B/s, 1 objects/s recovering
Nov 29 05:11:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 05:11:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 05:11:42 compute-0 ceph-mon[75176]: osdmap e73: 3 total, 3 up, 3 in
Nov 29 05:11:42 compute-0 ceph-mon[75176]: 7.17 scrub starts
Nov 29 05:11:42 compute-0 ceph-mon[75176]: 7.17 scrub ok
Nov 29 05:11:42 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 73 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=8.403535843s) [0] r=-1 lpr=73 pi=[52,73)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 128.637680054s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:42 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 73 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=52/53 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=73 pruub=8.403483391s) [0] r=-1 lpr=73 pi=[52,73)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.637680054s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:42 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=73) [0] r=0 lpr=73 pi=[52,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:42 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 29 05:11:42 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 29 05:11:42 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Nov 29 05:11:42 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Nov 29 05:11:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 355 B/s rd, 118 B/s wr, 0 op/s
Nov 29 05:11:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 05:11:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 05:11:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 05:11:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 05:11:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 29 05:11:43 compute-0 ceph-mon[75176]: 2.1e scrub starts
Nov 29 05:11:43 compute-0 ceph-mon[75176]: 2.1e scrub ok
Nov 29 05:11:43 compute-0 ceph-mon[75176]: 2.f deep-scrub starts
Nov 29 05:11:43 compute-0 ceph-mon[75176]: 2.f deep-scrub ok
Nov 29 05:11:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 05:11:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 05:11:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 05:11:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 05:11:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 05:11:43 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 29 05:11:43 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 74 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=73/74 n=1 ec=45/19 lis/c=52/52 les/c/f=53/53/0 sis=73) [0] r=0 lpr=73 pi=[52,73)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:43 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 29 05:11:43 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1b deep-scrub starts
Nov 29 05:11:43 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 74 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=45/19 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=9.426406860s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=35'39 lcod 0'0 mlcod 0'0 active pruub 130.664230347s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:43 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 74 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=45/19 lis/c=54/54 les/c/f=55/55/0 sis=74 pruub=9.425595284s) [0] r=-1 lpr=74 pi=[54,74)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 130.664230347s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:43 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 74 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=54/54 les/c/f=55/55/0 sis=74) [0] r=0 lpr=74 pi=[54,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:43 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 29 05:11:43 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1b deep-scrub ok
Nov 29 05:11:43 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 29 05:11:43 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 29 05:11:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 29 05:11:44 compute-0 ceph-mon[75176]: 5.17 scrub starts
Nov 29 05:11:44 compute-0 ceph-mon[75176]: 5.17 scrub ok
Nov 29 05:11:44 compute-0 ceph-mon[75176]: pgmap v155: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 355 B/s rd, 118 B/s wr, 0 op/s
Nov 29 05:11:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 05:11:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 05:11:44 compute-0 ceph-mon[75176]: osdmap e74: 3 total, 3 up, 3 in
Nov 29 05:11:44 compute-0 ceph-mon[75176]: 7.19 scrub starts
Nov 29 05:11:44 compute-0 ceph-mon[75176]: 5.1b deep-scrub starts
Nov 29 05:11:44 compute-0 ceph-mon[75176]: 7.19 scrub ok
Nov 29 05:11:44 compute-0 ceph-mon[75176]: 10.7 scrub starts
Nov 29 05:11:44 compute-0 ceph-mon[75176]: 10.7 scrub ok
Nov 29 05:11:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 05:11:44 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 29 05:11:44 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 75 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=74/75 n=1 ec=45/19 lis/c=54/54 les/c/f=55/55/0 sis=74) [0] r=0 lpr=74 pi=[54,74)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:45 compute-0 sshd-session[105095]: Accepted publickey for zuul from 192.168.122.30 port 57178 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:11:45 compute-0 systemd-logind[793]: New session 33 of user zuul.
Nov 29 05:11:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 05:11:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 05:11:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 05:11:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 05:11:45 compute-0 systemd[1]: Started Session 33 of User zuul.
Nov 29 05:11:45 compute-0 sshd-session[105095]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:11:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 29 05:11:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 05:11:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 05:11:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 05:11:45 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 29 05:11:45 compute-0 ceph-mon[75176]: 5.1b deep-scrub ok
Nov 29 05:11:45 compute-0 ceph-mon[75176]: osdmap e75: 3 total, 3 up, 3 in
Nov 29 05:11:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 05:11:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 05:11:46 compute-0 python3.9[105248]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:11:46 compute-0 ceph-mon[75176]: pgmap v158: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 05:11:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 05:11:46 compute-0 ceph-mon[75176]: osdmap e76: 3 total, 3 up, 3 in
Nov 29 05:11:46 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Nov 29 05:11:46 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Nov 29 05:11:46 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 76 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=76 pruub=13.412634850s) [1] r=-1 lpr=76 pi=[58,76)/1 crt=35'39 mlcod 35'39 active pruub 142.429672241s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:46 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 76 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=76 pruub=13.412522316s) [1] r=-1 lpr=76 pi=[58,76)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 142.429672241s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:46 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 76 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=76) [1] r=0 lpr=76 pi=[58,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 05:11:47 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 05:11:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 05:11:47 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 05:11:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 29 05:11:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 05:11:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 05:11:47 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 05:11:47 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 05:11:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 05:11:47 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 29 05:11:47 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 77 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77 pruub=15.211524963s) [2] r=-1 lpr=77 pi=[47,77)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 140.425521851s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:47 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 77 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77 pruub=15.211388588s) [2] r=-1 lpr=77 pi=[47,77)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.425521851s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:47 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 77 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77 pruub=15.224774361s) [2] r=-1 lpr=77 pi=[47,77)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 140.439880371s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:47 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 77 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77 pruub=15.224705696s) [2] r=-1 lpr=77 pi=[47,77)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 140.439880371s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:47 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 77 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=76/77 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=76) [1] r=0 lpr=76 pi=[58,76)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77) [2] r=0 lpr=77 pi=[47,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:47 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=77) [2] r=0 lpr=77 pi=[47,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:47 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 29 05:11:47 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 29 05:11:48 compute-0 sudo[105464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipzlodfeyshxwsxfhlfztjurvqcyhgxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393107.4798396-32-251570075127761/AnsiballZ_command.py'
Nov 29 05:11:48 compute-0 sudo[105464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:11:48 compute-0 python3.9[105466]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
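
The command module logs its full shell payload across the indented continuation lines above. Reassembled as a standalone script (verbatim, only re-wrapped), the step fetches repo-setup from GitHub, installs it into a throwaway venv, and points the host at the current-podified antelope repos:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main
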
Nov 29 05:11:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 29 05:11:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 05:11:48 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 29 05:11:48 compute-0 ceph-mon[75176]: 5.1c deep-scrub starts
Nov 29 05:11:48 compute-0 ceph-mon[75176]: 5.1c deep-scrub ok
Nov 29 05:11:48 compute-0 ceph-mon[75176]: pgmap v160: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:11:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 05:11:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 05:11:48 compute-0 ceph-mon[75176]: osdmap e77: 3 total, 3 up, 3 in
Nov 29 05:11:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 78 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 78 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 78 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:48 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 78 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=47/48 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:48 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[47,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:48 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[47,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:48 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[47,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:48 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[47,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 05:11:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 29 05:11:49 compute-0 ceph-mon[75176]: 5.1f scrub starts
Nov 29 05:11:49 compute-0 ceph-mon[75176]: 5.1f scrub ok
Nov 29 05:11:49 compute-0 ceph-mon[75176]: osdmap e78: 3 total, 3 up, 3 in
Nov 29 05:11:49 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Nov 29 05:11:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 05:11:49 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 29 05:11:49 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Nov 29 05:11:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 79 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=6 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:49 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 79 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=7 ec=47/32 lis/c=47/47 les/c/f=48/48/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[47,78)/1 crt=38'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
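
PGs 9.c and 9.1c sit in active+remapped with async=[2]: osd.1 stays on as interim acting primary while osd.2, the up member, is recovered asynchronously; the mbc={255={(0+1)=5}} bookkeeping appears to track objects still missing on that target (the reading of the mbc field is an assumption). PGs in this state can be listed directly:

    ceph pg ls remapped
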
Nov 29 05:11:49 compute-0 sudo[105477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:11:49 compute-0 sudo[105477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:49 compute-0 sudo[105477]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:50 compute-0 sudo[105504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:11:50 compute-0 sudo[105504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:50 compute-0 sudo[105504]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:50 compute-0 sudo[105529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:11:50 compute-0 sudo[105529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:50 compute-0 sudo[105529]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:50 compute-0 sudo[105554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:11:50 compute-0 sudo[105554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:50 compute-0 sudo[105554]: pam_unix(sudo:session): session closed for user root
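
The ceph-admin sudo bursts are the cephadm mgr module probing this host over SSH: a /bin/true reachability check, a "which python3", then the staged cephadm binary run with gather-facts. The logged invocation can be replayed as-is:

    sudo /bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
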
Nov 29 05:11:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:11:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:11:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:11:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:11:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:11:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:11:50 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 286051a6-d671-4dd3-8a75-0b2cc1f8ff52 does not exist
Nov 29 05:11:50 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a250bf51-f3c1-4ce2-85e2-cb8b89a33a48 does not exist
Nov 29 05:11:50 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 20c5587b-4f55-474b-bdc4-a5e09dd63767 does not exist
Nov 29 05:11:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:11:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:11:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:11:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:11:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:11:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:11:50 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 29 05:11:50 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 29 05:11:50 compute-0 sudo[105609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:11:50 compute-0 sudo[105609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:50 compute-0 sudo[105609]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 29 05:11:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 05:11:50 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 29 05:11:50 compute-0 ceph-mon[75176]: pgmap v163: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 05:11:50 compute-0 ceph-mon[75176]: 10.4 scrub starts
Nov 29 05:11:50 compute-0 ceph-mon[75176]: osdmap e79: 3 total, 3 up, 3 in
Nov 29 05:11:50 compute-0 ceph-mon[75176]: 10.4 scrub ok
Nov 29 05:11:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:11:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:11:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:11:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:11:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:11:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:11:50 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 80 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=7 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80 pruub=14.997946739s) [2] async=[2] r=-1 lpr=80 pi=[47,80)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 143.250976562s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:50 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 80 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=7 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80 pruub=14.997536659s) [2] r=-1 lpr=80 pi=[47,80)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.250976562s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:50 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 80 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=6 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80 pruub=14.995174408s) [2] async=[2] r=-1 lpr=80 pi=[47,80)/1 crt=38'583 lcod 0'0 mlcod 0'0 active pruub 143.249145508s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:50 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 80 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=78/79 n=6 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80 pruub=14.995050430s) [2] r=-1 lpr=80 pi=[47,80)/1 crt=38'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.249145508s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:50 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 80 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:50 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 80 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:50 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 80 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:50 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 80 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=7 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:50 compute-0 sudo[105634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:11:50 compute-0 sudo[105634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:50 compute-0 sudo[105634]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:50 compute-0 sudo[105659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:11:50 compute-0 sudo[105659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:50 compute-0 sudo[105659]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:51 compute-0 sudo[105684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:11:51 compute-0 sudo[105684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:11:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 05:11:51 compute-0 podman[105750]: 2025-11-29 05:11:51.497867892 +0000 UTC m=+0.053234081 container create 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:11:51 compute-0 systemd[1]: Started libpod-conmon-97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c.scope.
Nov 29 05:11:51 compute-0 podman[105750]: 2025-11-29 05:11:51.469055185 +0000 UTC m=+0.024421434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:11:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:11:51 compute-0 podman[105750]: 2025-11-29 05:11:51.606788211 +0000 UTC m=+0.162154450 container init 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:11:51 compute-0 podman[105750]: 2025-11-29 05:11:51.621190909 +0000 UTC m=+0.176557108 container start 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:11:51 compute-0 podman[105750]: 2025-11-29 05:11:51.625072441 +0000 UTC m=+0.180438630 container attach 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:11:51 compute-0 systemd[1]: libpod-97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c.scope: Deactivated successfully.
Nov 29 05:11:51 compute-0 keen_shannon[105769]: 167 167
Nov 29 05:11:51 compute-0 conmon[105769]: conmon 97941308151bd1bc5ca1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c.scope/container/memory.events
Nov 29 05:11:51 compute-0 podman[105750]: 2025-11-29 05:11:51.633616121 +0000 UTC m=+0.188982310 container died 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 05:11:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4c6fe43e6c3169cc5f596cdc6d6037d5f2e441a7a69310394c11d6344311ffa-merged.mount: Deactivated successfully.
Nov 29 05:11:51 compute-0 podman[105750]: 2025-11-29 05:11:51.690114288 +0000 UTC m=+0.245480477 container remove 97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:11:51 compute-0 systemd[1]: libpod-conmon-97941308151bd1bc5ca1ba6b39a84350e9e578f9b3cf56b4b4b70fa70f921d2c.scope: Deactivated successfully.
Nov 29 05:11:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 29 05:11:51 compute-0 ceph-mon[75176]: 7.1d scrub starts
Nov 29 05:11:51 compute-0 ceph-mon[75176]: 7.1d scrub ok
Nov 29 05:11:51 compute-0 ceph-mon[75176]: osdmap e80: 3 total, 3 up, 3 in
Nov 29 05:11:51 compute-0 podman[105794]: 2025-11-29 05:11:51.880052251 +0000 UTC m=+0.056544169 container create 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:11:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 05:11:51 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 29 05:11:51 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 81 pg[9.c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=7 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:51 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 81 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=6 ec=47/32 lis/c=78/47 les/c/f=79/48/0 sis=80) [2] r=0 lpr=80 pi=[47,80)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:51 compute-0 systemd[1]: Started libpod-conmon-58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c.scope.
Nov 29 05:11:51 compute-0 podman[105794]: 2025-11-29 05:11:51.850037076 +0000 UTC m=+0.026529064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:11:51 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:52 compute-0 podman[105794]: 2025-11-29 05:11:52.008116819 +0000 UTC m=+0.184608797 container init 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:11:52 compute-0 podman[105794]: 2025-11-29 05:11:52.028344904 +0000 UTC m=+0.204836842 container start 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:11:52 compute-0 podman[105794]: 2025-11-29 05:11:52.032730277 +0000 UTC m=+0.209222215 container attach 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:11:52 compute-0 ceph-mon[75176]: pgmap v166: 305 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 05:11:52 compute-0 ceph-mon[75176]: osdmap e81: 3 total, 3 up, 3 in
Nov 29 05:11:53 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 29 05:11:53 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 29 05:11:53 compute-0 tender_ganguly[105811]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:11:53 compute-0 tender_ganguly[105811]: --> relative data size: 1.0
Nov 29 05:11:53 compute-0 tender_ganguly[105811]: --> All data devices are unavailable
Nov 29 05:11:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 05:11:53 compute-0 systemd[1]: libpod-58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c.scope: Deactivated successfully.
Nov 29 05:11:53 compute-0 systemd[1]: libpod-58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c.scope: Consumed 1.238s CPU time.
Nov 29 05:11:53 compute-0 podman[105794]: 2025-11-29 05:11:53.324208676 +0000 UTC m=+1.500700614 container died 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 05:11:53 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 29 05:11:53 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 29 05:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d28bb2e888c5d55c758b6b969e7eebe4a40d1940bd5d1813e1c8ef4433fb39a4-merged.mount: Deactivated successfully.
Nov 29 05:11:53 compute-0 ceph-mon[75176]: 10.3 scrub starts
Nov 29 05:11:53 compute-0 ceph-mon[75176]: 10.3 scrub ok
Nov 29 05:11:53 compute-0 podman[105794]: 2025-11-29 05:11:53.986561277 +0000 UTC m=+2.163053215 container remove 58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 05:11:54 compute-0 systemd[1]: libpod-conmon-58512f121b08f4d96ea372834ca3edab8d6cb1132ae080a18d4ac8bdbe6fc78c.scope: Deactivated successfully.
Nov 29 05:11:54 compute-0 sudo[105684]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:54 compute-0 sudo[105858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:11:54 compute-0 sudo[105858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:54 compute-0 sudo[105858]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:54 compute-0 sudo[105883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:11:54 compute-0 sudo[105883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:54 compute-0 sudo[105883]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:54 compute-0 sudo[105908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:11:54 compute-0 sudo[105908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:54 compute-0 sudo[105908]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:54 compute-0 sudo[105933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:11:54 compute-0 sudo[105933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:54 compute-0 podman[105997]: 2025-11-29 05:11:54.707333239 +0000 UTC m=+0.039764375 container create 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 05:11:54 compute-0 systemd[1]: Started libpod-conmon-7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909.scope.
Nov 29 05:11:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:11:54 compute-0 podman[105997]: 2025-11-29 05:11:54.774861095 +0000 UTC m=+0.107292241 container init 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:11:54 compute-0 podman[105997]: 2025-11-29 05:11:54.782019734 +0000 UTC m=+0.114450870 container start 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:11:54 compute-0 podman[105997]: 2025-11-29 05:11:54.688073928 +0000 UTC m=+0.020505074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:11:54 compute-0 podman[105997]: 2025-11-29 05:11:54.785585198 +0000 UTC m=+0.118016354 container attach 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:11:54 compute-0 boring_aryabhata[106018]: 167 167
Nov 29 05:11:54 compute-0 systemd[1]: libpod-7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909.scope: Deactivated successfully.
Nov 29 05:11:54 compute-0 conmon[106018]: conmon 7e8d87b9b161bbcb9227 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909.scope/container/memory.events
Nov 29 05:11:54 compute-0 podman[105997]: 2025-11-29 05:11:54.789213793 +0000 UTC m=+0.121644929 container died 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Nov 29 05:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d294f2027b7a96ad5a030fae906ef3dcc47d967df7d9e34bcfedaf5ea094d02-merged.mount: Deactivated successfully.
Nov 29 05:11:54 compute-0 podman[105997]: 2025-11-29 05:11:54.832738266 +0000 UTC m=+0.165169412 container remove 7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_aryabhata, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:11:54 compute-0 systemd[1]: libpod-conmon-7e8d87b9b161bbcb9227f25fbfcde14398e6adcabf7a45a2541693e4fa2da909.scope: Deactivated successfully.
Nov 29 05:11:54 compute-0 ceph-mon[75176]: pgmap v168: 305 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 302 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 05:11:54 compute-0 ceph-mon[75176]: 5.15 scrub starts
Nov 29 05:11:54 compute-0 ceph-mon[75176]: 5.15 scrub ok
Nov 29 05:11:54 compute-0 podman[106042]: 2025-11-29 05:11:54.992385826 +0000 UTC m=+0.039584560 container create b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:11:55 compute-0 systemd[1]: Started libpod-conmon-b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab.scope.
Nov 29 05:11:55 compute-0 podman[106042]: 2025-11-29 05:11:54.975290875 +0000 UTC m=+0.022489629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:11:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe93fa385c0d2dc6f0570f8a758ba80daea682e27aaf0f124e5068c43b482405/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe93fa385c0d2dc6f0570f8a758ba80daea682e27aaf0f124e5068c43b482405/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe93fa385c0d2dc6f0570f8a758ba80daea682e27aaf0f124e5068c43b482405/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe93fa385c0d2dc6f0570f8a758ba80daea682e27aaf0f124e5068c43b482405/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:55 compute-0 podman[106042]: 2025-11-29 05:11:55.110170163 +0000 UTC m=+0.157368987 container init b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:11:55 compute-0 podman[106042]: 2025-11-29 05:11:55.120376533 +0000 UTC m=+0.167575267 container start b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:11:55 compute-0 podman[106042]: 2025-11-29 05:11:55.123993728 +0000 UTC m=+0.171192522 container attach b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 05:11:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 05:11:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 05:11:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 05:11:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 05:11:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 05:11:55 compute-0 sudo[105464]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:55 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 29 05:11:55 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 29 05:11:55 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 29 05:11:55 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 29 05:11:55 compute-0 cool_mestorf[106061]: {
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:     "0": [
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:         {
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "devices": [
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "/dev/loop3"
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             ],
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_name": "ceph_lv0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_size": "21470642176",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "name": "ceph_lv0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "tags": {
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.cluster_name": "ceph",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.crush_device_class": "",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.encrypted": "0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.osd_id": "0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.type": "block",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.vdo": "0"
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             },
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "type": "block",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "vg_name": "ceph_vg0"
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:         }
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:     ],
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:     "1": [
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:         {
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "devices": [
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "/dev/loop4"
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             ],
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_name": "ceph_lv1",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_size": "21470642176",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "name": "ceph_lv1",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "tags": {
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.cluster_name": "ceph",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.crush_device_class": "",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.encrypted": "0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.osd_id": "1",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.type": "block",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.vdo": "0"
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             },
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "type": "block",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "vg_name": "ceph_vg1"
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:         }
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:     ],
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:     "2": [
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:         {
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "devices": [
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "/dev/loop5"
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             ],
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_name": "ceph_lv2",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_size": "21470642176",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "name": "ceph_lv2",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "tags": {
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.cluster_name": "ceph",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.crush_device_class": "",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.encrypted": "0",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.osd_id": "2",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.type": "block",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:                 "ceph.vdo": "0"
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             },
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "type": "block",
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:             "vg_name": "ceph_vg2"
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:         }
Nov 29 05:11:55 compute-0 cool_mestorf[106061]:     ]
Nov 29 05:11:55 compute-0 cool_mestorf[106061]: }
Nov 29 05:11:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 29 05:11:55 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Nov 29 05:11:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 05:11:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 05:11:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 05:11:55 compute-0 systemd[1]: libpod-b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab.scope: Deactivated successfully.
Nov 29 05:11:55 compute-0 podman[106042]: 2025-11-29 05:11:55.98694516 +0000 UTC m=+1.034143924 container died b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:11:55 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 29 05:11:55 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 82 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=62/63 n=1 ec=45/19 lis/c=62/62 les/c/f=63/63/0 sis=82 pruub=12.563106537s) [1] r=-1 lpr=82 pi=[62,82)/1 crt=35'39 mlcod 35'39 active pruub 150.616012573s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:11:55 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 82 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=62/63 n=1 ec=45/19 lis/c=62/62 les/c/f=63/63/0 sis=82 pruub=12.563038826s) [1] r=-1 lpr=82 pi=[62,82)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 150.616012573s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:11:55 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 05:11:55 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 05:11:55 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Nov 29 05:11:56 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 82 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=62/62 les/c/f=63/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe93fa385c0d2dc6f0570f8a758ba80daea682e27aaf0f124e5068c43b482405-merged.mount: Deactivated successfully.
Nov 29 05:11:56 compute-0 sshd-session[105098]: Connection closed by 192.168.122.30 port 57178
Nov 29 05:11:56 compute-0 podman[106042]: 2025-11-29 05:11:56.057251752 +0000 UTC m=+1.104450476 container remove b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:11:56 compute-0 sshd-session[105095]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:11:56 compute-0 systemd[1]: libpod-conmon-b890490584b3e40e666c53bc18b19c716b19b0e393dc8e6cdef2edb6b9375fab.scope: Deactivated successfully.
Nov 29 05:11:56 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Nov 29 05:11:56 compute-0 systemd[1]: session-33.scope: Consumed 8.551s CPU time.
Nov 29 05:11:56 compute-0 systemd-logind[793]: Session 33 logged out. Waiting for processes to exit.
Nov 29 05:11:56 compute-0 systemd-logind[793]: Removed session 33.
Nov 29 05:11:56 compute-0 sudo[105933]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:56 compute-0 sudo[106107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:11:56 compute-0 sudo[106107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:56 compute-0 sudo[106107]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:56 compute-0 sudo[106132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:11:56 compute-0 sudo[106132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:56 compute-0 sudo[106132]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:56 compute-0 sudo[106157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:11:56 compute-0 sudo[106157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:56 compute-0 sudo[106157]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:56 compute-0 sudo[106182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:11:56 compute-0 sudo[106182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:56 compute-0 podman[106247]: 2025-11-29 05:11:56.652304001 +0000 UTC m=+0.034984043 container create 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:11:56 compute-0 systemd[1]: Started libpod-conmon-5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16.scope.
Nov 29 05:11:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:11:56 compute-0 podman[106247]: 2025-11-29 05:11:56.73096873 +0000 UTC m=+0.113648772 container init 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:11:56 compute-0 podman[106247]: 2025-11-29 05:11:56.637615486 +0000 UTC m=+0.020295548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:11:56 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 29 05:11:56 compute-0 podman[106247]: 2025-11-29 05:11:56.737043922 +0000 UTC m=+0.119723954 container start 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:11:56 compute-0 podman[106247]: 2025-11-29 05:11:56.740051182 +0000 UTC m=+0.122731244 container attach 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 05:11:56 compute-0 nervous_napier[106264]: 167 167
Nov 29 05:11:56 compute-0 systemd[1]: libpod-5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16.scope: Deactivated successfully.
Nov 29 05:11:56 compute-0 podman[106247]: 2025-11-29 05:11:56.74248881 +0000 UTC m=+0.125168852 container died 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:11:56 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 29 05:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1068e914adbb856978618698790c627e6792772b3f39c07dc3ffb76bdcbbc52-merged.mount: Deactivated successfully.
Nov 29 05:11:56 compute-0 podman[106247]: 2025-11-29 05:11:56.783885432 +0000 UTC m=+0.166565474 container remove 5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 05:11:56 compute-0 systemd[1]: libpod-conmon-5889a9d1ccc0b05661b59d2fc6af24c8f8b0d18b9198c253a1c22b82cc3ecf16.scope: Deactivated successfully.
Nov 29 05:11:56 compute-0 podman[106288]: 2025-11-29 05:11:56.933015516 +0000 UTC m=+0.039061029 container create 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:11:56 compute-0 systemd[1]: Started libpod-conmon-06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4.scope.
Nov 29 05:11:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 29 05:11:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:11:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 05:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fb3e4a2a79bec70edb52f52d2dacad509b04d31cd79095f02035bb21709521/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fb3e4a2a79bec70edb52f52d2dacad509b04d31cd79095f02035bb21709521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fb3e4a2a79bec70edb52f52d2dacad509b04d31cd79095f02035bb21709521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77fb3e4a2a79bec70edb52f52d2dacad509b04d31cd79095f02035bb21709521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:11:57 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 29 05:11:57 compute-0 podman[106288]: 2025-11-29 05:11:56.914409639 +0000 UTC m=+0.020455182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:11:57 compute-0 ceph-mon[75176]: pgmap v169: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 05:11:57 compute-0 ceph-mon[75176]: 7.1e scrub starts
Nov 29 05:11:57 compute-0 ceph-mon[75176]: 7.1e scrub ok
Nov 29 05:11:57 compute-0 ceph-mon[75176]: 5.7 scrub starts
Nov 29 05:11:57 compute-0 ceph-mon[75176]: 5.7 scrub ok
Nov 29 05:11:57 compute-0 ceph-mon[75176]: 10.5 deep-scrub starts
Nov 29 05:11:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 05:11:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 05:11:57 compute-0 ceph-mon[75176]: osdmap e82: 3 total, 3 up, 3 in
Nov 29 05:11:57 compute-0 ceph-mon[75176]: 10.5 deep-scrub ok
Nov 29 05:11:57 compute-0 podman[106288]: 2025-11-29 05:11:57.016719592 +0000 UTC m=+0.122765105 container init 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:11:57 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 83 pg[6.d( v 35'39 lc 31'13 (0'0,35'39] local-lis/les=82/83 n=1 ec=45/19 lis/c=62/62 les/c/f=63/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:11:57 compute-0 podman[106288]: 2025-11-29 05:11:57.029524643 +0000 UTC m=+0.135570186 container start 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:11:57 compute-0 podman[106288]: 2025-11-29 05:11:57.033860075 +0000 UTC m=+0.139905608 container attach 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:11:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 05:11:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 05:11:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 05:11:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 05:11:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 05:11:57 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 29 05:11:57 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 29 05:11:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 29 05:11:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 05:11:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 05:11:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 05:11:58 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 29 05:11:58 compute-0 romantic_payne[106304]: {
Nov 29 05:11:58 compute-0 romantic_payne[106304]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "osd_id": 0,
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "type": "bluestore"
Nov 29 05:11:58 compute-0 romantic_payne[106304]:     },
Nov 29 05:11:58 compute-0 romantic_payne[106304]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "osd_id": 1,
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "type": "bluestore"
Nov 29 05:11:58 compute-0 romantic_payne[106304]:     },
Nov 29 05:11:58 compute-0 romantic_payne[106304]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "osd_id": 2,
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:11:58 compute-0 romantic_payne[106304]:         "type": "bluestore"
Nov 29 05:11:58 compute-0 romantic_payne[106304]:     }
Nov 29 05:11:58 compute-0 romantic_payne[106304]: }
Nov 29 05:11:58 compute-0 ceph-mon[75176]: 8.1 scrub starts
Nov 29 05:11:58 compute-0 ceph-mon[75176]: 8.1 scrub ok
Nov 29 05:11:58 compute-0 ceph-mon[75176]: osdmap e83: 3 total, 3 up, 3 in
Nov 29 05:11:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 05:11:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 05:11:58 compute-0 systemd[1]: libpod-06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4.scope: Deactivated successfully.
Nov 29 05:11:58 compute-0 systemd[1]: libpod-06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4.scope: Consumed 1.053s CPU time.
Nov 29 05:11:58 compute-0 conmon[106304]: conmon 06df2283b28692d5aa1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4.scope/container/memory.events
Nov 29 05:11:58 compute-0 podman[106288]: 2025-11-29 05:11:58.079481009 +0000 UTC m=+1.185526532 container died 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:11:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-77fb3e4a2a79bec70edb52f52d2dacad509b04d31cd79095f02035bb21709521-merged.mount: Deactivated successfully.
Nov 29 05:11:58 compute-0 podman[106288]: 2025-11-29 05:11:58.131391318 +0000 UTC m=+1.237436851 container remove 06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:11:58 compute-0 systemd[1]: libpod-conmon-06df2283b28692d5aa1bbfa31b4e288191015d69f3df14c50b21bc11a48709c4.scope: Deactivated successfully.
Nov 29 05:11:58 compute-0 sudo[106182]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:11:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:11:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:11:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:11:58 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev e100450d-b0d5-4482-9732-0c64a871f559 does not exist
Nov 29 05:11:58 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 4c11a984-9d3f-4b95-99c7-b04d79c9e40d does not exist
Nov 29 05:11:58 compute-0 sudo[106351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:11:58 compute-0 sudo[106351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:58 compute-0 sudo[106351]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:58 compute-0 sudo[106376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:11:58 compute-0 sudo[106376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:11:58 compute-0 sudo[106376]: pam_unix(sudo:session): session closed for user root
Nov 29 05:11:58 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.5 deep-scrub starts
Nov 29 05:11:58 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.5 deep-scrub ok
Nov 29 05:11:58 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 29 05:11:59 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 29 05:11:59 compute-0 ceph-mon[75176]: pgmap v172: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 05:11:59 compute-0 ceph-mon[75176]: 8.3 scrub starts
Nov 29 05:11:59 compute-0 ceph-mon[75176]: 8.3 scrub ok
Nov 29 05:11:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 05:11:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 05:11:59 compute-0 ceph-mon[75176]: osdmap e84: 3 total, 3 up, 3 in
Nov 29 05:11:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:11:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:11:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:11:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 2 objects/s recovering
Nov 29 05:11:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 05:11:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 05:11:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 05:11:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 05:12:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 29 05:12:00 compute-0 ceph-mon[75176]: 8.5 deep-scrub starts
Nov 29 05:12:00 compute-0 ceph-mon[75176]: 8.5 deep-scrub ok
Nov 29 05:12:00 compute-0 ceph-mon[75176]: 10.8 scrub starts
Nov 29 05:12:00 compute-0 ceph-mon[75176]: 10.8 scrub ok
Nov 29 05:12:00 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 05:12:00 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 05:12:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 05:12:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 05:12:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 05:12:00 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 29 05:12:00 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 85 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=85 pruub=8.313556671s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=35'39 mlcod 35'39 active pruub 150.433609009s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:00 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 85 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=85 pruub=8.313452721s) [2] r=-1 lpr=85 pi=[58,85)/1 crt=35'39 mlcod 0'0 unknown NOTIFY pruub 150.433609009s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:00 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 85 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=85) [2] r=0 lpr=85 pi=[58,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:00 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 29 05:12:00 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 29 05:12:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 29 05:12:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 05:12:01 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 29 05:12:01 compute-0 ceph-mon[75176]: pgmap v174: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 2 objects/s recovering
Nov 29 05:12:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 05:12:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 05:12:01 compute-0 ceph-mon[75176]: osdmap e85: 3 total, 3 up, 3 in
Nov 29 05:12:01 compute-0 ceph-mon[75176]: 10.a scrub starts
Nov 29 05:12:01 compute-0 ceph-mon[75176]: 10.a scrub ok
Nov 29 05:12:01 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 86 pg[6.f( v 35'39 lc 31'1 (0'0,35'39] local-lis/les=85/86 n=1 ec=45/19 lis/c=58/58 les/c/f=59/59/0 sis=85) [2] r=0 lpr=85 pi=[58,85)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Nov 29 05:12:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 05:12:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 05:12:01 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Nov 29 05:12:01 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Nov 29 05:12:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 29 05:12:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 05:12:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 05:12:02 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 29 05:12:02 compute-0 ceph-mon[75176]: osdmap e86: 3 total, 3 up, 3 in
Nov 29 05:12:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 05:12:02 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 29 05:12:02 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 29 05:12:03 compute-0 ceph-mon[75176]: pgmap v177: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Nov 29 05:12:03 compute-0 ceph-mon[75176]: 8.7 deep-scrub starts
Nov 29 05:12:03 compute-0 ceph-mon[75176]: 8.7 deep-scrub ok
Nov 29 05:12:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 05:12:03 compute-0 ceph-mon[75176]: osdmap e87: 3 total, 3 up, 3 in
Nov 29 05:12:03 compute-0 ceph-mon[75176]: 2.b scrub starts
Nov 29 05:12:03 compute-0 ceph-mon[75176]: 2.b scrub ok
Nov 29 05:12:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 05:12:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 29 05:12:03 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 05:12:03 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 29 05:12:03 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 29 05:12:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 29 05:12:04 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 05:12:04 compute-0 ceph-mon[75176]: 2.8 scrub starts
Nov 29 05:12:04 compute-0 ceph-mon[75176]: 2.8 scrub ok
Nov 29 05:12:04 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 05:12:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 05:12:04 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 29 05:12:04 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 29 05:12:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:04 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 29 05:12:04 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 29 05:12:04 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 29 05:12:05 compute-0 ceph-mon[75176]: pgmap v179: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 05:12:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 05:12:05 compute-0 ceph-mon[75176]: osdmap e88: 3 total, 3 up, 3 in
Nov 29 05:12:05 compute-0 ceph-mon[75176]: 10.c scrub starts
Nov 29 05:12:05 compute-0 ceph-mon[75176]: 10.c scrub ok
Nov 29 05:12:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 114 B/s, 0 objects/s recovering
Nov 29 05:12:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 29 05:12:05 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 05:12:05 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 29 05:12:05 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 29 05:12:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 29 05:12:06 compute-0 ceph-mon[75176]: 8.8 scrub starts
Nov 29 05:12:06 compute-0 ceph-mon[75176]: 8.8 scrub ok
Nov 29 05:12:06 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 05:12:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 05:12:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 05:12:06 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 29 05:12:06 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.9 deep-scrub starts
Nov 29 05:12:06 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.9 deep-scrub ok
Nov 29 05:12:07 compute-0 ceph-mon[75176]: pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 114 B/s, 0 objects/s recovering
Nov 29 05:12:07 compute-0 ceph-mon[75176]: 8.a scrub starts
Nov 29 05:12:07 compute-0 ceph-mon[75176]: 8.a scrub ok
Nov 29 05:12:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 05:12:07 compute-0 ceph-mon[75176]: osdmap e89: 3 total, 3 up, 3 in
Nov 29 05:12:07 compute-0 ceph-mon[75176]: 10.9 deep-scrub starts
Nov 29 05:12:07 compute-0 ceph-mon[75176]: 10.9 deep-scrub ok
Nov 29 05:12:07 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 29 05:12:07 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 29 05:12:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Nov 29 05:12:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 05:12:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 05:12:07 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 29 05:12:07 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 29 05:12:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 29 05:12:08 compute-0 ceph-mon[75176]: 10.18 scrub starts
Nov 29 05:12:08 compute-0 ceph-mon[75176]: 10.18 scrub ok
Nov 29 05:12:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 05:12:08 compute-0 ceph-mon[75176]: 5.4 scrub starts
Nov 29 05:12:08 compute-0 ceph-mon[75176]: 5.4 scrub ok
Nov 29 05:12:08 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 05:12:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 05:12:08 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 29 05:12:08 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 29 05:12:08 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 29 05:12:08 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 29 05:12:08 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 29 05:12:09 compute-0 ceph-mon[75176]: pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 100 B/s, 0 objects/s recovering
Nov 29 05:12:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 05:12:09 compute-0 ceph-mon[75176]: osdmap e90: 3 total, 3 up, 3 in
Nov 29 05:12:09 compute-0 ceph-mon[75176]: 10.1b scrub starts
Nov 29 05:12:09 compute-0 ceph-mon[75176]: 10.1b scrub ok
Nov 29 05:12:09 compute-0 ceph-mon[75176]: 10.d scrub starts
Nov 29 05:12:09 compute-0 ceph-mon[75176]: 10.d scrub ok
Nov 29 05:12:09 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 90 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90 pruub=10.151612282s) [2] r=-1 lpr=90 pi=[56,90)/1 crt=38'583 mlcod 0'0 active pruub 161.373535156s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:09 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 90 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90 pruub=10.151553154s) [2] r=-1 lpr=90 pi=[56,90)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 161.373535156s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:09 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 90 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=90) [2] r=0 lpr=90 pi=[56,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 29 05:12:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 05:12:09 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 29 05:12:09 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 91 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=0 lpr=91 pi=[56,91)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:09 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 91 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=56/57 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=0 lpr=91 pi=[56,91)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:09 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=-1 lpr=91 pi=[56,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:09 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 91 pg[9.13( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] r=-1 lpr=91 pi=[56,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:09 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 29 05:12:09 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 29 05:12:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 115 B/s, 0 objects/s recovering
Nov 29 05:12:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 05:12:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 05:12:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 29 05:12:10 compute-0 ceph-mon[75176]: osdmap e91: 3 total, 3 up, 3 in
Nov 29 05:12:10 compute-0 ceph-mon[75176]: 10.1c scrub starts
Nov 29 05:12:10 compute-0 ceph-mon[75176]: 10.1c scrub ok
Nov 29 05:12:10 compute-0 ceph-mon[75176]: pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 115 B/s, 0 objects/s recovering
Nov 29 05:12:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 05:12:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 05:12:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 05:12:10 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 29 05:12:10 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 92 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=91/92 n=6 ec=47/32 lis/c=56/56 les/c/f=57/57/0 sis=91) [2]/[0] async=[2] r=0 lpr=91 pi=[56,91)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:10 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 29 05:12:10 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 29 05:12:10 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 29 05:12:10 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 29 05:12:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 29 05:12:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 05:12:11 compute-0 ceph-mon[75176]: osdmap e92: 3 total, 3 up, 3 in
Nov 29 05:12:11 compute-0 ceph-mon[75176]: 5.5 scrub starts
Nov 29 05:12:11 compute-0 ceph-mon[75176]: 5.5 scrub ok
Nov 29 05:12:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 05:12:11 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 29 05:12:11 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 93 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=91/92 n=6 ec=47/32 lis/c=91/56 les/c/f=92/57/0 sis=93 pruub=15.044969559s) [2] async=[2] r=-1 lpr=93 pi=[56,93)/1 crt=38'583 mlcod 38'583 active pruub 168.314895630s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:11 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 93 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=91/92 n=6 ec=47/32 lis/c=91/56 les/c/f=92/57/0 sis=93 pruub=15.044779778s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 168.314895630s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:11 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 93 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=91/56 les/c/f=92/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:11 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 93 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=91/56 les/c/f=92/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:11 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1d deep-scrub starts
Nov 29 05:12:11 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1d deep-scrub ok
Nov 29 05:12:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:12:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:12:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:12:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:12:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:12:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:12:11 compute-0 sshd-session[106401]: Accepted publickey for zuul from 192.168.122.30 port 52622 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:12:11 compute-0 systemd-logind[793]: New session 34 of user zuul.
Nov 29 05:12:11 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 29 05:12:11 compute-0 sshd-session[106401]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:12:11 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 29 05:12:11 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 29 05:12:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 29 05:12:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 05:12:12 compute-0 ceph-mon[75176]: 9.2 scrub starts
Nov 29 05:12:12 compute-0 ceph-mon[75176]: 9.2 scrub ok
Nov 29 05:12:12 compute-0 ceph-mon[75176]: osdmap e93: 3 total, 3 up, 3 in
Nov 29 05:12:12 compute-0 ceph-mon[75176]: 10.1d deep-scrub starts
Nov 29 05:12:12 compute-0 ceph-mon[75176]: 10.1d deep-scrub ok
Nov 29 05:12:12 compute-0 ceph-mon[75176]: pgmap v189: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:12 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 29 05:12:12 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 94 pg[9.13( v 38'583 (0'0,38'583] local-lis/les=93/94 n=6 ec=47/32 lis/c=91/56 les/c/f=92/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:12 compute-0 python3.9[106554]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 05:12:12 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 29 05:12:12 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 29 05:12:13 compute-0 ceph-mon[75176]: 8.13 scrub starts
Nov 29 05:12:13 compute-0 ceph-mon[75176]: 8.13 scrub ok
Nov 29 05:12:13 compute-0 ceph-mon[75176]: osdmap e94: 3 total, 3 up, 3 in
Nov 29 05:12:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:14 compute-0 python3.9[106728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:12:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:14 compute-0 ceph-mon[75176]: 8.16 scrub starts
Nov 29 05:12:14 compute-0 ceph-mon[75176]: 8.16 scrub ok
Nov 29 05:12:14 compute-0 ceph-mon[75176]: pgmap v191: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:14 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 29 05:12:14 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Nov 29 05:12:14 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 29 05:12:14 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Nov 29 05:12:14 compute-0 sudo[106882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vamnwhdtozolpfmbyxkopurkroownzrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393134.4487119-45-162816219528437/AnsiballZ_command.py'
Nov 29 05:12:14 compute-0 sudo[106882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:12:15 compute-0 python3.9[106884]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:12:15 compute-0 sudo[106882]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:15 compute-0 ceph-mon[75176]: 10.e scrub starts
Nov 29 05:12:15 compute-0 ceph-mon[75176]: 10.e scrub ok
Nov 29 05:12:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 05:12:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 05:12:15 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 05:12:15 compute-0 sudo[107035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbwlrzbbfhikaltaddavtnmiickkgtcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393135.480139-57-47300829051519/AnsiballZ_stat.py'
Nov 29 05:12:15 compute-0 sudo[107035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:12:16 compute-0 python3.9[107037]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:12:16 compute-0 sudo[107035]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 29 05:12:16 compute-0 ceph-mon[75176]: 8.17 deep-scrub starts
Nov 29 05:12:16 compute-0 ceph-mon[75176]: 8.17 deep-scrub ok
Nov 29 05:12:16 compute-0 ceph-mon[75176]: pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 05:12:16 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 05:12:16 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 05:12:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 05:12:16 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 29 05:12:16 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 95 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=95 pruub=10.042437553s) [1] r=-1 lpr=95 pi=[55,95)/1 crt=38'583 mlcod 0'0 active pruub 168.364135742s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:16 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 95 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=95 pruub=10.042379379s) [1] r=-1 lpr=95 pi=[55,95)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 168.364135742s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:16 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=95) [1] r=0 lpr=95 pi=[55,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:16 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.4 deep-scrub starts
Nov 29 05:12:16 compute-0 sudo[107189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itnkxzkajemovmyodcdebnqrecshjjai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393136.4690292-68-32472058345933/AnsiballZ_file.py'
Nov 29 05:12:16 compute-0 sudo[107189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:12:16 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.4 deep-scrub ok
Nov 29 05:12:17 compute-0 python3.9[107191]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:12:17 compute-0 sudo[107189]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 29 05:12:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 05:12:17 compute-0 ceph-mon[75176]: osdmap e95: 3 total, 3 up, 3 in
Nov 29 05:12:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 05:12:17 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 29 05:12:17 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[55,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:17 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 96 pg[9.15( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=-1 lpr=96 pi=[55,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:17 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 96 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=0 lpr=96 pi=[55,96)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:17 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 96 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] r=0 lpr=96 pi=[55,96)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 05:12:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 05:12:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 05:12:17 compute-0 sudo[107341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txnfrxtrwrcaxmoyjuslrpsozhzefwws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393137.4357834-77-139058596913735/AnsiballZ_file.py'
Nov 29 05:12:17 compute-0 sudo[107341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:12:17 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.1 deep-scrub starts
Nov 29 05:12:17 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.1 deep-scrub ok
Nov 29 05:12:18 compute-0 python3.9[107343]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:12:18 compute-0 sudo[107341]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 29 05:12:18 compute-0 ceph-mon[75176]: 9.4 deep-scrub starts
Nov 29 05:12:18 compute-0 ceph-mon[75176]: 9.4 deep-scrub ok
Nov 29 05:12:18 compute-0 ceph-mon[75176]: osdmap e96: 3 total, 3 up, 3 in
Nov 29 05:12:18 compute-0 ceph-mon[75176]: pgmap v195: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 05:12:18 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 05:12:18 compute-0 ceph-mon[75176]: 10.1 deep-scrub starts
Nov 29 05:12:18 compute-0 ceph-mon[75176]: 10.1 deep-scrub ok
Nov 29 05:12:18 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 05:12:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 05:12:18 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 29 05:12:18 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 97 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=97 pruub=15.896838188s) [0] r=-1 lpr=97 pi=[67,97)/1 crt=38'583 mlcod 0'0 active pruub 166.364501953s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:18 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 97 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=97 pruub=15.896558762s) [0] r=-1 lpr=97 pi=[67,97)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 166.364501953s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:18 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=97) [0] r=0 lpr=97 pi=[67,97)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:18 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 97 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=96/97 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=96) [1]/[0] async=[1] r=0 lpr=96 pi=[55,96)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:19 compute-0 python3.9[107493]: ansible-ansible.builtin.service_facts Invoked
Nov 29 05:12:19 compute-0 network[107510]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 05:12:19 compute-0 network[107511]: 'network-scripts' will be removed from distribution in near future.
Nov 29 05:12:19 compute-0 network[107512]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 05:12:19 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 29 05:12:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 29 05:12:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 05:12:19 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 29 05:12:19 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 29 05:12:19 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 98 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=98) [0]/[2] r=0 lpr=98 pi=[67,98)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:19 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 98 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=98) [0]/[2] r=0 lpr=98 pi=[67,98)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:19 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[67,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:19 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 98 pg[9.16( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=98) [0]/[2] r=-1 lpr=98 pi=[67,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:19 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 98 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=96/97 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98 pruub=15.741697311s) [1] async=[1] r=-1 lpr=98 pi=[55,98)/1 crt=38'583 mlcod 38'583 active pruub 176.988769531s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:19 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 98 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=96/97 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98 pruub=15.741625786s) [1] r=-1 lpr=98 pi=[55,98)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 176.988769531s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:19 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 98 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98) [1] r=0 lpr=98 pi=[55,98)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:19 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 98 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98) [1] r=0 lpr=98 pi=[55,98)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 05:12:19 compute-0 ceph-mon[75176]: osdmap e97: 3 total, 3 up, 3 in
Nov 29 05:12:19 compute-0 ceph-mon[75176]: osdmap e98: 3 total, 3 up, 3 in
Nov 29 05:12:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 05:12:20 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 29 05:12:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 29 05:12:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 05:12:20 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 29 05:12:20 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 29 05:12:20 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 99 pg[9.15( v 38'583 (0'0,38'583] local-lis/les=98/99 n=6 ec=47/32 lis/c=96/55 les/c/f=97/56/0 sis=98) [1] r=0 lpr=98 pi=[55,98)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:20 compute-0 ceph-mon[75176]: 10.1f scrub starts
Nov 29 05:12:20 compute-0 ceph-mon[75176]: 10.1f scrub ok
Nov 29 05:12:20 compute-0 ceph-mon[75176]: pgmap v198: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 05:12:20 compute-0 ceph-mon[75176]: osdmap e99: 3 total, 3 up, 3 in
Nov 29 05:12:20 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 29 05:12:20 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 99 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=98/99 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=98) [0]/[2] async=[0] r=0 lpr=98 pi=[67,98)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:20 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 29 05:12:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 29 05:12:21 compute-0 ceph-mon[75176]: 4.18 scrub starts
Nov 29 05:12:21 compute-0 ceph-mon[75176]: 4.18 scrub ok
Nov 29 05:12:21 compute-0 ceph-mon[75176]: 10.15 scrub starts
Nov 29 05:12:21 compute-0 ceph-mon[75176]: 10.15 scrub ok
Nov 29 05:12:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 29 05:12:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 05:12:21 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 29 05:12:21 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 100 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=98/67 les/c/f=99/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:21 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 100 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=98/67 les/c/f=99/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:21 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 100 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=98/99 n=6 ec=47/32 lis/c=98/67 les/c/f=99/68/0 sis=100 pruub=15.557282448s) [0] async=[0] r=-1 lpr=100 pi=[67,100)/1 crt=38'583 mlcod 38'583 active pruub 169.064346313s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:21 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 100 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=98/99 n=6 ec=47/32 lis/c=98/67 les/c/f=99/68/0 sis=100 pruub=15.557166100s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 169.064346313s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:22 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 29 05:12:22 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 29 05:12:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 29 05:12:22 compute-0 ceph-mon[75176]: pgmap v200: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Nov 29 05:12:22 compute-0 ceph-mon[75176]: osdmap e100: 3 total, 3 up, 3 in
Nov 29 05:12:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 05:12:22 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 29 05:12:22 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 101 pg[9.16( v 38'583 (0'0,38'583] local-lis/les=100/101 n=6 ec=47/32 lis/c=98/67 les/c/f=99/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:22 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.19 deep-scrub starts
Nov 29 05:12:22 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.19 deep-scrub ok
Nov 29 05:12:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Nov 29 05:12:23 compute-0 ceph-mon[75176]: 4.1b scrub starts
Nov 29 05:12:23 compute-0 ceph-mon[75176]: 4.1b scrub ok
Nov 29 05:12:23 compute-0 ceph-mon[75176]: osdmap e101: 3 total, 3 up, 3 in
Nov 29 05:12:23 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 29 05:12:23 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 29 05:12:23 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 29 05:12:23 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 29 05:12:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:24 compute-0 ceph-mon[75176]: 8.19 deep-scrub starts
Nov 29 05:12:24 compute-0 ceph-mon[75176]: 8.19 deep-scrub ok
Nov 29 05:12:24 compute-0 ceph-mon[75176]: pgmap v203: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Nov 29 05:12:24 compute-0 ceph-mon[75176]: 10.17 scrub starts
Nov 29 05:12:24 compute-0 ceph-mon[75176]: 10.17 scrub ok
Nov 29 05:12:24 compute-0 python3.9[107774]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:12:24 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 29 05:12:24 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 29 05:12:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 05:12:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 29 05:12:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 05:12:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 29 05:12:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 05:12:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 05:12:25 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 29 05:12:25 compute-0 ceph-mon[75176]: 8.1e scrub starts
Nov 29 05:12:25 compute-0 ceph-mon[75176]: 8.1e scrub ok
Nov 29 05:12:25 compute-0 ceph-mon[75176]: 2.1f scrub starts
Nov 29 05:12:25 compute-0 ceph-mon[75176]: 2.1f scrub ok
Nov 29 05:12:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 05:12:25 compute-0 python3.9[107924]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:12:26 compute-0 ceph-mon[75176]: pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 05:12:26 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 05:12:26 compute-0 ceph-mon[75176]: osdmap e102: 3 total, 3 up, 3 in
Nov 29 05:12:26 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 29 05:12:26 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 29 05:12:27 compute-0 python3.9[108078]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:12:27 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 29 05:12:27 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 29 05:12:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 05:12:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 29 05:12:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 05:12:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 29 05:12:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 05:12:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 05:12:27 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 29 05:12:27 compute-0 ceph-mon[75176]: 2.1d scrub starts
Nov 29 05:12:27 compute-0 ceph-mon[75176]: 2.1d scrub ok
Nov 29 05:12:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 05:12:27 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Nov 29 05:12:27 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Nov 29 05:12:28 compute-0 sudo[108234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnlosdwvdfrczbptqxsxpduqnprhfzig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393147.6414087-125-214087478400672/AnsiballZ_setup.py'
Nov 29 05:12:28 compute-0 sudo[108234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:12:28 compute-0 python3.9[108236]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:12:28 compute-0 ceph-mon[75176]: 4.1a scrub starts
Nov 29 05:12:28 compute-0 ceph-mon[75176]: 4.1a scrub ok
Nov 29 05:12:28 compute-0 ceph-mon[75176]: pgmap v206: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 05:12:28 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 05:12:28 compute-0 ceph-mon[75176]: osdmap e103: 3 total, 3 up, 3 in
Nov 29 05:12:28 compute-0 ceph-mon[75176]: 2.1c deep-scrub starts
Nov 29 05:12:28 compute-0 ceph-mon[75176]: 2.1c deep-scrub ok
Nov 29 05:12:28 compute-0 sudo[108234]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:28 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 29 05:12:28 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 29 05:12:29 compute-0 sudo[108318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slrdcuplzgdppqmjezblzbjmfwbnovdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393147.6414087-125-214087478400672/AnsiballZ_dnf.py'
Nov 29 05:12:29 compute-0 sudo[108318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:12:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:29 compute-0 python3.9[108320]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:12:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 29 05:12:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 29 05:12:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 05:12:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 29 05:12:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 05:12:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 05:12:29 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 29 05:12:29 compute-0 ceph-mon[75176]: 10.16 scrub starts
Nov 29 05:12:29 compute-0 ceph-mon[75176]: 10.16 scrub ok
Nov 29 05:12:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 05:12:29 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Nov 29 05:12:29 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Nov 29 05:12:30 compute-0 ceph-mon[75176]: pgmap v208: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Nov 29 05:12:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 05:12:30 compute-0 ceph-mon[75176]: osdmap e104: 3 total, 3 up, 3 in
Nov 29 05:12:30 compute-0 ceph-mon[75176]: 8.14 deep-scrub starts
Nov 29 05:12:30 compute-0 ceph-mon[75176]: 8.14 deep-scrub ok
Nov 29 05:12:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 104 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=104 pruub=11.739901543s) [2] r=-1 lpr=104 pi=[55,104)/1 crt=38'583 mlcod 0'0 active pruub 184.364822388s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:30 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 104 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=104 pruub=11.739595413s) [2] r=-1 lpr=104 pi=[55,104)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 184.364822388s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:30 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 104 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=104) [2] r=0 lpr=104 pi=[55,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:30 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 29 05:12:30 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 29 05:12:31 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 29 05:12:31 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 29 05:12:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 05:12:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 05:12:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 29 05:12:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 05:12:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 05:12:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 05:12:31 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 29 05:12:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 105 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=105) [2]/[0] r=-1 lpr=105 pi=[55,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:31 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 105 pg[9.19( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=105) [2]/[0] r=-1 lpr=105 pi=[55,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:31 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 105 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=105) [2]/[0] r=0 lpr=105 pi=[55,105)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:31 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 105 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=55/56 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=105) [2]/[0] r=0 lpr=105 pi=[55,105)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 29 05:12:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 05:12:32 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 29 05:12:32 compute-0 ceph-mon[75176]: 9.a scrub starts
Nov 29 05:12:32 compute-0 ceph-mon[75176]: 9.a scrub ok
Nov 29 05:12:32 compute-0 ceph-mon[75176]: 4.e scrub starts
Nov 29 05:12:32 compute-0 ceph-mon[75176]: 4.e scrub ok
Nov 29 05:12:32 compute-0 ceph-mon[75176]: pgmap v210: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 05:12:32 compute-0 ceph-mon[75176]: osdmap e105: 3 total, 3 up, 3 in
Nov 29 05:12:32 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 29 05:12:32 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 29 05:12:33 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 106 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=105/106 n=6 ec=47/32 lis/c=55/55 les/c/f=56/56/0 sis=105) [2]/[0] async=[2] r=0 lpr=105 pi=[55,105)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 05:12:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 05:12:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 29 05:12:33 compute-0 ceph-mon[75176]: osdmap e106: 3 total, 3 up, 3 in
Nov 29 05:12:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 05:12:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 05:12:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 05:12:33 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 29 05:12:33 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 107 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=105/106 n=6 ec=47/32 lis/c=105/55 les/c/f=106/56/0 sis=107 pruub=15.618248940s) [2] async=[2] r=-1 lpr=107 pi=[55,107)/1 crt=38'583 mlcod 38'583 active pruub 191.227386475s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:33 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 107 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=105/106 n=6 ec=47/32 lis/c=105/55 les/c/f=106/56/0 sis=107 pruub=15.618089676s) [2] r=-1 lpr=107 pi=[55,107)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 191.227386475s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:33 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 107 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=105/55 les/c/f=106/56/0 sis=107) [2] r=0 lpr=107 pi=[55,107)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:33 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 107 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=105/55 les/c/f=106/56/0 sis=107) [2] r=0 lpr=107 pi=[55,107)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:33 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 29 05:12:33 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 29 05:12:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 29 05:12:34 compute-0 ceph-mon[75176]: 9.10 scrub starts
Nov 29 05:12:34 compute-0 ceph-mon[75176]: 9.10 scrub ok
Nov 29 05:12:34 compute-0 ceph-mon[75176]: pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:34 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 05:12:34 compute-0 ceph-mon[75176]: osdmap e107: 3 total, 3 up, 3 in
Nov 29 05:12:34 compute-0 ceph-mon[75176]: 11.17 scrub starts
Nov 29 05:12:34 compute-0 ceph-mon[75176]: 11.17 scrub ok
Nov 29 05:12:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 05:12:34 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 29 05:12:34 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 108 pg[9.19( v 38'583 (0'0,38'583] local-lis/les=107/108 n=6 ec=47/32 lis/c=105/55 les/c/f=106/56/0 sis=107) [2] r=0 lpr=107 pi=[55,107)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Nov 29 05:12:35 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Nov 29 05:12:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 29 05:12:35 compute-0 ceph-mon[75176]: osdmap e108: 3 total, 3 up, 3 in
Nov 29 05:12:36 compute-0 ceph-mon[75176]: 9.12 deep-scrub starts
Nov 29 05:12:36 compute-0 ceph-mon[75176]: 9.12 deep-scrub ok
Nov 29 05:12:36 compute-0 ceph-mon[75176]: pgmap v216: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 29 05:12:36 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 29 05:12:36 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 29 05:12:36 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 29 05:12:37 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 29 05:12:37 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 29 05:12:37 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 29 05:12:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 05:12:37 compute-0 ceph-mon[75176]: 3.1b scrub starts
Nov 29 05:12:37 compute-0 ceph-mon[75176]: 3.1b scrub ok
Nov 29 05:12:37 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 29 05:12:37 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 29 05:12:38 compute-0 ceph-mon[75176]: 9.14 scrub starts
Nov 29 05:12:38 compute-0 ceph-mon[75176]: 9.14 scrub ok
Nov 29 05:12:38 compute-0 ceph-mon[75176]: 4.a scrub starts
Nov 29 05:12:38 compute-0 ceph-mon[75176]: 4.a scrub ok
Nov 29 05:12:38 compute-0 ceph-mon[75176]: pgmap v217: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 05:12:38 compute-0 ceph-mon[75176]: 11.14 scrub starts
Nov 29 05:12:38 compute-0 ceph-mon[75176]: 11.14 scrub ok
Nov 29 05:12:39 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 29 05:12:39 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 29 05:12:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:39 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 29 05:12:39 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 29 05:12:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Nov 29 05:12:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 29 05:12:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 29 05:12:40 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 29 05:12:40 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 29 05:12:40 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 29 05:12:40 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 29 05:12:40 compute-0 ceph-mon[75176]: 9.1a scrub starts
Nov 29 05:12:40 compute-0 ceph-mon[75176]: 9.1a scrub ok
Nov 29 05:12:40 compute-0 ceph-mon[75176]: 4.13 scrub starts
Nov 29 05:12:40 compute-0 ceph-mon[75176]: 4.13 scrub ok
Nov 29 05:12:40 compute-0 ceph-mon[75176]: pgmap v218: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Nov 29 05:12:40 compute-0 ceph-mon[75176]: 7.1b scrub starts
Nov 29 05:12:40 compute-0 ceph-mon[75176]: 7.1b scrub ok
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:12:41
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 05:12:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 05:12:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:12:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:12:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 29 05:12:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 05:12:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 05:12:41 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 29 05:12:41 compute-0 ceph-mon[75176]: 11.5 scrub starts
Nov 29 05:12:41 compute-0 ceph-mon[75176]: 11.5 scrub ok
Nov 29 05:12:41 compute-0 ceph-mon[75176]: 4.11 scrub starts
Nov 29 05:12:41 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 109 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=6 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=109 pruub=14.298078537s) [0] r=-1 lpr=109 pi=[80,109)/1 crt=38'583 mlcod 0'0 active pruub 188.080856323s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:41 compute-0 ceph-mon[75176]: 4.11 scrub ok
Nov 29 05:12:41 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 109 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=6 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=109 pruub=14.298032761s) [0] r=-1 lpr=109 pi=[80,109)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 188.080856323s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 05:12:41 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=109) [0] r=0 lpr=109 pi=[80,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:41 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 29 05:12:41 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 29 05:12:42 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 29 05:12:42 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 29 05:12:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 29 05:12:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 05:12:42 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 29 05:12:42 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[80,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:42 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[80,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:42 compute-0 ceph-mon[75176]: pgmap v219: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 05:12:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 05:12:42 compute-0 ceph-mon[75176]: osdmap e109: 3 total, 3 up, 3 in
Nov 29 05:12:42 compute-0 ceph-mon[75176]: 8.10 scrub starts
Nov 29 05:12:42 compute-0 ceph-mon[75176]: 8.10 scrub ok
Nov 29 05:12:42 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 110 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=6 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=110) [0]/[2] r=0 lpr=110 pi=[80,110)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:42 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 110 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=80/81 n=6 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=110) [0]/[2] r=0 lpr=110 pi=[80,110)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:43 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 29 05:12:43 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 29 05:12:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 05:12:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 05:12:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 29 05:12:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 05:12:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 05:12:43 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 29 05:12:43 compute-0 ceph-mon[75176]: 4.1c scrub starts
Nov 29 05:12:43 compute-0 ceph-mon[75176]: 4.1c scrub ok
Nov 29 05:12:43 compute-0 ceph-mon[75176]: osdmap e110: 3 total, 3 up, 3 in
Nov 29 05:12:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 05:12:43 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 111 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=110/111 n=6 ec=47/32 lis/c=80/80 les/c/f=81/81/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[80,110)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:44 compute-0 sshd-session[108416]: Received disconnect from 80.94.93.233 port 56126:11:  [preauth]
Nov 29 05:12:44 compute-0 sshd-session[108416]: Disconnected from authenticating user root 80.94.93.233 port 56126 [preauth]
Nov 29 05:12:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 29 05:12:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 05:12:44 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 29 05:12:44 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 112 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=110/111 n=6 ec=47/32 lis/c=110/80 les/c/f=111/81/0 sis=112 pruub=15.749832153s) [0] async=[0] r=-1 lpr=112 pi=[80,112)/1 crt=38'583 mlcod 38'583 active pruub 192.114669800s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:44 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 112 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=110/111 n=6 ec=47/32 lis/c=110/80 les/c/f=111/81/0 sis=112 pruub=15.749622345s) [0] r=-1 lpr=112 pi=[80,112)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 192.114669800s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:44 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 112 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=110/80 les/c/f=111/81/0 sis=112) [0] r=0 lpr=112 pi=[80,112)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:44 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 112 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=110/80 les/c/f=111/81/0 sis=112) [0] r=0 lpr=112 pi=[80,112)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:44 compute-0 ceph-mon[75176]: 11.7 scrub starts
Nov 29 05:12:44 compute-0 ceph-mon[75176]: 11.7 scrub ok
Nov 29 05:12:44 compute-0 ceph-mon[75176]: pgmap v222: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 05:12:44 compute-0 ceph-mon[75176]: osdmap e111: 3 total, 3 up, 3 in
Nov 29 05:12:44 compute-0 ceph-mon[75176]: osdmap e112: 3 total, 3 up, 3 in
Nov 29 05:12:45 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 29 05:12:45 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 29 05:12:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 29 05:12:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 05:12:45 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 29 05:12:45 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 113 pg[9.1c( v 38'583 (0'0,38'583] local-lis/les=112/113 n=6 ec=47/32 lis/c=110/80 les/c/f=111/81/0 sis=112) [0] r=0 lpr=112 pi=[80,112)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Nov 29 05:12:46 compute-0 ceph-mon[75176]: 11.a scrub starts
Nov 29 05:12:46 compute-0 ceph-mon[75176]: 11.a scrub ok
Nov 29 05:12:46 compute-0 ceph-mon[75176]: osdmap e113: 3 total, 3 up, 3 in
Nov 29 05:12:46 compute-0 ceph-mon[75176]: pgmap v226: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Nov 29 05:12:46 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Nov 29 05:12:46 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Nov 29 05:12:47 compute-0 ceph-mon[75176]: 7.1f deep-scrub starts
Nov 29 05:12:47 compute-0 ceph-mon[75176]: 7.1f deep-scrub ok
Nov 29 05:12:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Nov 29 05:12:47 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 29 05:12:48 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 29 05:12:48 compute-0 ceph-mon[75176]: pgmap v227: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Nov 29 05:12:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:49 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 29 05:12:49 compute-0 ceph-mon[75176]: 3.1f scrub starts
Nov 29 05:12:49 compute-0 ceph-mon[75176]: 3.1f scrub ok
Nov 29 05:12:49 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 29 05:12:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 05:12:50 compute-0 ceph-mon[75176]: 11.c scrub starts
Nov 29 05:12:50 compute-0 ceph-mon[75176]: 11.c scrub ok
Nov 29 05:12:50 compute-0 ceph-mon[75176]: pgmap v228: 305 pgs: 1 peering, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 05:12:50 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 29 05:12:50 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
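The autoscaler numbers above fit one relation: pg target = usage ratio x bias x 300, where 300 is plausibly the default mon_target_pg_per_osd of 100 times the 3 OSDs reported in the osdmap lines (e.g. 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337 for '.mgr'). The "quantized to" value then looks like the nearest power of two with a per-pool floor; the floors below (1 for '.mgr', 16 for the CephFS metadata pool, 32 otherwise) are assumptions chosen only because they reproduce every logged result. A minimal sketch under those assumptions:

    # Reconstruction of the pg_autoscaler arithmetic visible above. Only the
    # ratio * bias * 300 product is verified against the logged numbers; the
    # constants and floors are assumptions, not Ceph source.
    def nearest_power_of_two(n: float) -> int:
        p = 1
        while p * 2 <= n:
            p *= 2
        return p * 2 if n - p > p * 2 - n else p

    def pg_target(usage_ratio: float, bias: float, pg_num_min: int,
                  target_pg_per_osd: int = 100, num_osds: int = 3) -> int:
        raw = usage_ratio * bias * target_pg_per_osd * num_osds
        return max(pg_num_min, nearest_power_of_two(max(raw, 1)))

    print(pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))    # '.mgr' -> 1
    print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))   # cephfs.meta -> 16
    print(pg_target(2.225674773718825e-06, 1.0, pg_num_min=32))   # rgw.log -> 32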
Nov 29 05:12:51 compute-0 ceph-mon[75176]: 7.1a scrub starts
Nov 29 05:12:51 compute-0 ceph-mon[75176]: 7.1a scrub ok
Nov 29 05:12:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 29 05:12:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 05:12:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 05:12:52 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Nov 29 05:12:52 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Nov 29 05:12:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 29 05:12:52 compute-0 ceph-mon[75176]: pgmap v229: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 29 05:12:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 05:12:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 05:12:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 05:12:52 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 29 05:12:52 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 114 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=114 pruub=13.921597481s) [0] r=-1 lpr=114 pi=[67,114)/1 crt=38'583 mlcod 0'0 active pruub 198.359375000s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:52 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 114 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=114 pruub=13.921504021s) [0] r=-1 lpr=114 pi=[67,114)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 198.359375000s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:52 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 114 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=114) [0] r=0 lpr=114 pi=[67,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 29 05:12:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 05:12:53 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 29 05:12:53 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[67,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:53 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 115 pg[9.1e( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[67,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:53 compute-0 ceph-mon[75176]: 11.13 scrub starts
Nov 29 05:12:53 compute-0 ceph-mon[75176]: 11.13 scrub ok
Nov 29 05:12:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 05:12:53 compute-0 ceph-mon[75176]: osdmap e114: 3 total, 3 up, 3 in
Nov 29 05:12:53 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 115 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=115) [0]/[2] r=0 lpr=115 pi=[67,115)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:53 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 115 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=67/68 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=115) [0]/[2] r=0 lpr=115 pi=[67,115)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 05:12:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:12:54 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 29 05:12:54 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 29 05:12:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 29 05:12:54 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
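This finishes the third step of the same walk: the mgr raises pgp_num_actual on default.rgw.log one PG at a time (30, 31, now 32, matching the pool's pg_num of 32) so placement moves gradually, and each step lands in a new osdmap epoch with the peering/remap churn logged around it. The equivalent from outside the mgr would be `ceph osd pool set default.rgw.log pgp_num_actual 32`, or through librados with exactly the JSON the audit lines record; a minimal sketch, assuming the python3-rados binding and a readable /etc/ceph/ceph.conf:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Verbatim payload from the audit line above.
    cmd = {"prefix": "osd pool set", "pool": "default.rgw.log",
           "var": "pgp_num_actual", "val": "32"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)  # ret == 0 on success
    cluster.shutdown()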
Nov 29 05:12:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 05:12:54 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 29 05:12:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 116 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=116 pruub=12.907160759s) [1] r=-1 lpr=116 pi=[68,116)/1 crt=38'583 mlcod 0'0 active pruub 199.364028931s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 116 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=116 pruub=12.906765938s) [1] r=-1 lpr=116 pi=[68,116)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 199.364028931s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:54 compute-0 ceph-mon[75176]: osdmap e115: 3 total, 3 up, 3 in
Nov 29 05:12:54 compute-0 ceph-mon[75176]: pgmap v232: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:12:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 05:12:54 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=116) [1] r=0 lpr=116 pi=[68,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:54 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 116 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=115/116 n=6 ec=47/32 lis/c=67/67 les/c/f=68/68/0 sis=115) [0]/[2] async=[0] r=0 lpr=115 pi=[67,115)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:55 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 29 05:12:55 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 29 05:12:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 29 05:12:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 05:12:55 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 29 05:12:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 117 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] r=0 lpr=117 pi=[68,117)/1 crt=38'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 117 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=68/69 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] r=0 lpr=117 pi=[68,117)/1 crt=38'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 117 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=115/116 n=6 ec=47/32 lis/c=115/67 les/c/f=116/68/0 sis=117 pruub=15.401042938s) [0] async=[0] r=-1 lpr=117 pi=[67,117)/1 crt=38'583 mlcod 38'583 active pruub 202.881149292s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:55 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 117 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=115/116 n=6 ec=47/32 lis/c=115/67 les/c/f=116/68/0 sis=117 pruub=15.400755882s) [0] r=-1 lpr=117 pi=[67,117)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 202.881149292s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:55 compute-0 ceph-mon[75176]: 11.10 scrub starts
Nov 29 05:12:55 compute-0 ceph-mon[75176]: 11.10 scrub ok
Nov 29 05:12:55 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 05:12:55 compute-0 ceph-mon[75176]: osdmap e116: 3 total, 3 up, 3 in
Nov 29 05:12:55 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[68,117)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:55 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 117 pg[9.1f( empty local-lis/les=0/0 n=0 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] r=-1 lpr=117 pi=[68,117)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:55 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 117 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=115/67 les/c/f=116/68/0 sis=117) [0] r=0 lpr=117 pi=[67,117)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:55 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 117 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=115/67 les/c/f=116/68/0 sis=117) [0] r=0 lpr=117 pi=[67,117)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%)
Nov 29 05:12:55 compute-0 sshd-session[108461]: Connection closed by 101.47.141.125 port 58790 [preauth]
Nov 29 05:12:56 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 29 05:12:56 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 29 05:12:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 29 05:12:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 05:12:56 compute-0 ceph-mon[75176]: 7.3 scrub starts
Nov 29 05:12:56 compute-0 ceph-mon[75176]: 7.3 scrub ok
Nov 29 05:12:56 compute-0 ceph-mon[75176]: osdmap e117: 3 total, 3 up, 3 in
Nov 29 05:12:56 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 29 05:12:56 compute-0 ceph-osd[89151]: osd.0 pg_epoch: 118 pg[9.1e( v 38'583 (0'0,38'583] local-lis/les=117/118 n=6 ec=47/32 lis/c=115/67 les/c/f=116/68/0 sis=117) [0] r=0 lpr=117 pi=[67,117)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:57 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 118 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=117/118 n=6 ec=47/32 lis/c=68/68 les/c/f=69/69/0 sis=117) [1]/[2] async=[1] r=0 lpr=117 pi=[68,117)/1 crt=38'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%)
Nov 29 05:12:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 29 05:12:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 05:12:57 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 29 05:12:57 compute-0 ceph-mon[75176]: pgmap v235: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%)
Nov 29 05:12:57 compute-0 ceph-mon[75176]: 11.16 scrub starts
Nov 29 05:12:57 compute-0 ceph-mon[75176]: 11.16 scrub ok
Nov 29 05:12:57 compute-0 ceph-mon[75176]: osdmap e118: 3 total, 3 up, 3 in
Nov 29 05:12:57 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 119 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=117/118 n=6 ec=47/32 lis/c=117/68 les/c/f=118/69/0 sis=119 pruub=15.632976532s) [1] async=[1] r=-1 lpr=119 pi=[68,119)/1 crt=38'583 mlcod 38'583 active pruub 205.383071899s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:57 compute-0 ceph-osd[91343]: osd.2 pg_epoch: 119 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=117/118 n=6 ec=47/32 lis/c=117/68 les/c/f=118/69/0 sis=119 pruub=15.632826805s) [1] r=-1 lpr=119 pi=[68,119)/1 crt=38'583 mlcod 0'0 unknown NOTIFY pruub 205.383071899s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 05:12:57 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 119 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=117/68 les/c/f=118/69/0 sis=119) [1] r=0 lpr=119 pi=[68,119)/1 luod=0'0 crt=38'583 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 05:12:57 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 119 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=0/0 n=6 ec=47/32 lis/c=117/68 les/c/f=118/69/0 sis=119) [1] r=0 lpr=119 pi=[68,119)/1 crt=38'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 05:12:58 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.f deep-scrub starts
Nov 29 05:12:58 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.f deep-scrub ok
Nov 29 05:12:58 compute-0 sudo[108468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:12:58 compute-0 sudo[108468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:12:58 compute-0 sudo[108468]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:58 compute-0 sudo[108493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:12:58 compute-0 sudo[108493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:12:58 compute-0 sudo[108493]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:58 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 29 05:12:58 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 29 05:12:58 compute-0 sudo[108518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:12:58 compute-0 sudo[108518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:12:58 compute-0 sudo[108518]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 29 05:12:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 05:12:58 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 29 05:12:58 compute-0 ceph-mon[75176]: pgmap v237: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%)
Nov 29 05:12:58 compute-0 ceph-mon[75176]: osdmap e119: 3 total, 3 up, 3 in
Nov 29 05:12:58 compute-0 sudo[108543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:12:58 compute-0 sudo[108543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:12:58 compute-0 ceph-osd[90181]: osd.1 pg_epoch: 120 pg[9.1f( v 38'583 (0'0,38'583] local-lis/les=119/120 n=6 ec=47/32 lis/c=117/68 les/c/f=118/69/0 sis=119) [1] r=0 lpr=119 pi=[68,119)/1 crt=38'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 05:12:59 compute-0 sudo[108543]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:12:59 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:12:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:12:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:12:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:12:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:12:59 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 593b6ef3-479b-45bf-be54-54f51eadb4ff does not exist
Nov 29 05:12:59 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 01084ed7-be77-4add-80b7-c683970055d0 does not exist
Nov 29 05:12:59 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 720b8e6c-eb94-49ad-b9fa-9e6b0a315aff does not exist
Nov 29 05:12:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:12:59 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:12:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:12:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:12:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:12:59 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:12:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:12:59 compute-0 sudo[108598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:12:59 compute-0 sudo[108598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:12:59 compute-0 sudo[108598]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:59 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 29 05:12:59 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 29 05:12:59 compute-0 sudo[108623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:12:59 compute-0 sudo[108623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:12:59 compute-0 sudo[108623]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:59 compute-0 sudo[108648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:12:59 compute-0 sudo[108648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:12:59 compute-0 sudo[108648]: pam_unix(sudo:session): session closed for user root
Nov 29 05:12:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%); 27 B/s, 1 objects/s recovering
Nov 29 05:12:59 compute-0 sudo[108673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:12:59 compute-0 sudo[108673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:12:59 compute-0 ceph-mon[75176]: 11.f deep-scrub starts
Nov 29 05:12:59 compute-0 ceph-mon[75176]: 11.f deep-scrub ok
Nov 29 05:12:59 compute-0 ceph-mon[75176]: 3.1e scrub starts
Nov 29 05:12:59 compute-0 ceph-mon[75176]: 3.1e scrub ok
Nov 29 05:12:59 compute-0 ceph-mon[75176]: osdmap e120: 3 total, 3 up, 3 in
Nov 29 05:12:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:12:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:12:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:12:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:12:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:12:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:12:59 compute-0 podman[108738]: 2025-11-29 05:12:59.678396611 +0000 UTC m=+0.048318543 container create cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:12:59 compute-0 systemd[1]: Started libpod-conmon-cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba.scope.
Nov 29 05:12:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:12:59 compute-0 podman[108738]: 2025-11-29 05:12:59.650885455 +0000 UTC m=+0.020807417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:12:59 compute-0 podman[108738]: 2025-11-29 05:12:59.758313781 +0000 UTC m=+0.128235733 container init cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:12:59 compute-0 podman[108738]: 2025-11-29 05:12:59.764923757 +0000 UTC m=+0.134845689 container start cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:12:59 compute-0 podman[108738]: 2025-11-29 05:12:59.768298213 +0000 UTC m=+0.138220165 container attach cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:12:59 compute-0 adoring_haibt[108755]: 167 167
Nov 29 05:12:59 compute-0 systemd[1]: libpod-cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba.scope: Deactivated successfully.
Nov 29 05:12:59 compute-0 podman[108738]: 2025-11-29 05:12:59.771070853 +0000 UTC m=+0.140992785 container died cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:12:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bea31166212c86de9f55b8feeeb892f6c3430950f2a9e9231b6fc3709225b9c-merged.mount: Deactivated successfully.
Nov 29 05:12:59 compute-0 podman[108738]: 2025-11-29 05:12:59.82554064 +0000 UTC m=+0.195462572 container remove cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:12:59 compute-0 systemd[1]: libpod-conmon-cfac02b5daaa9a712ee98feb8373558dc12fdf960f90c2ec8d07fa2c2e4c7dba.scope: Deactivated successfully.
Nov 29 05:12:59 compute-0 podman[108779]: 2025-11-29 05:12:59.975467279 +0000 UTC m=+0.039749946 container create 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 05:13:00 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 29 05:13:00 compute-0 systemd[1]: Started libpod-conmon-71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2.scope.
Nov 29 05:13:00 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 29 05:13:00 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:00 compute-0 podman[108779]: 2025-11-29 05:13:00.044127215 +0000 UTC m=+0.108409912 container init 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 29 05:13:00 compute-0 podman[108779]: 2025-11-29 05:12:59.956285494 +0000 UTC m=+0.020568181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:13:00 compute-0 podman[108779]: 2025-11-29 05:13:00.053846351 +0000 UTC m=+0.118129008 container start 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:13:00 compute-0 podman[108779]: 2025-11-29 05:13:00.070300196 +0000 UTC m=+0.134582893 container attach 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:13:00 compute-0 ceph-mon[75176]: 11.1d scrub starts
Nov 29 05:13:00 compute-0 ceph-mon[75176]: 11.1d scrub ok
Nov 29 05:13:00 compute-0 ceph-mon[75176]: pgmap v240: 305 pgs: 1 activating+remapped, 304 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%); 27 B/s, 1 objects/s recovering
Nov 29 05:13:01 compute-0 sad_margulis[108796]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:13:01 compute-0 sad_margulis[108796]: --> relative data size: 1.0
Nov 29 05:13:01 compute-0 sad_margulis[108796]: --> All data devices are unavailable
Nov 29 05:13:01 compute-0 systemd[1]: libpod-71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2.scope: Deactivated successfully.
Nov 29 05:13:01 compute-0 systemd[1]: libpod-71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2.scope: Consumed 1.061s CPU time.
Nov 29 05:13:01 compute-0 podman[108779]: 2025-11-29 05:13:01.175607605 +0000 UTC m=+1.239890302 container died 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e3e2c533f01bcd235a64527628f85d7e3473906b0f2652abe8a657abc46b590-merged.mount: Deactivated successfully.
Nov 29 05:13:01 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 29 05:13:01 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 29 05:13:01 compute-0 podman[108779]: 2025-11-29 05:13:01.284709812 +0000 UTC m=+1.348992489 container remove 71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_margulis, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:13:01 compute-0 systemd[1]: libpod-conmon-71cf239e9515a17390a56ded49e942e89642853dcc2c8096c7938396ae1b20e2.scope: Deactivated successfully.
Nov 29 05:13:01 compute-0 sudo[108673]: pam_unix(sudo:session): session closed for user root
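The "--> All data devices are unavailable" result from the batch container above is expected rather than fatal here: the three LVs passed to `lvm batch` (/dev/ceph_vg0/ceph_lv0 through ceph_lv2) most likely already back the three up/in OSDs, so ceph-volume refuses to reuse them, the one-shot container exits, and cephadm falls through to the `lvm list` call in the next sudo line to confirm what each LV holds. A hedged way to run that inventory check by hand, mirroring the logged command (fsid copied from the log; the JSON shape is standard ceph-volume lvm list output):

    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "93f82912-647c-5e78-b081-707d0a2966d8",
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout

    # Maps OSD ids to the LVs they already occupy, e.g.
    # {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0", "type": "block", ...}], ...}
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(f"osd.{osd_id} already owns {dev.get('lv_path')}")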
Nov 29 05:13:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 05:13:01 compute-0 sudo[108839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:13:01 compute-0 sudo[108839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:13:01 compute-0 sudo[108839]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:01 compute-0 sudo[108864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:13:01 compute-0 sudo[108864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:13:01 compute-0 sudo[108864]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:01 compute-0 sudo[108889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:13:01 compute-0 sudo[108889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:13:01 compute-0 sudo[108889]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:01 compute-0 sudo[108914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:13:01 compute-0 sudo[108914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:13:01 compute-0 ceph-mon[75176]: 8.c scrub starts
Nov 29 05:13:01 compute-0 ceph-mon[75176]: 8.c scrub ok
Nov 29 05:13:02 compute-0 podman[108979]: 2025-11-29 05:13:02.015638888 +0000 UTC m=+0.067839857 container create f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 05:13:02 compute-0 systemd[1]: Started libpod-conmon-f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c.scope.
Nov 29 05:13:02 compute-0 podman[108979]: 2025-11-29 05:13:01.987209138 +0000 UTC m=+0.039410157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:13:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:13:02 compute-0 podman[108979]: 2025-11-29 05:13:02.124949111 +0000 UTC m=+0.177150120 container init f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:13:02 compute-0 podman[108979]: 2025-11-29 05:13:02.133487516 +0000 UTC m=+0.185688475 container start f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:13:02 compute-0 podman[108979]: 2025-11-29 05:13:02.137251252 +0000 UTC m=+0.189452221 container attach f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:13:02 compute-0 unruffled_engelbart[108995]: 167 167
Nov 29 05:13:02 compute-0 systemd[1]: libpod-f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c.scope: Deactivated successfully.
Nov 29 05:13:02 compute-0 podman[108979]: 2025-11-29 05:13:02.145007467 +0000 UTC m=+0.197208426 container died f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:13:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-523a7020b3a3a2eaa5dbf5086d2aa6461ae6ec80058d46bdc3e62e844999f233-merged.mount: Deactivated successfully.
Nov 29 05:13:02 compute-0 podman[108979]: 2025-11-29 05:13:02.198744246 +0000 UTC m=+0.250945195 container remove f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:13:02 compute-0 systemd[1]: libpod-conmon-f7c053797c3db61593a102e39c5e5b0018d57276b3b56cbe82f1333810a5a03c.scope: Deactivated successfully.
Nov 29 05:13:02 compute-0 podman[109019]: 2025-11-29 05:13:02.434857654 +0000 UTC m=+0.066630865 container create 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 05:13:02 compute-0 systemd[1]: Started libpod-conmon-0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35.scope.
Nov 29 05:13:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b20f608e4ab87ebc416d1e3b6baa6acb446b1e65d66ccad6b663e05479b944/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:02 compute-0 podman[109019]: 2025-11-29 05:13:02.412701444 +0000 UTC m=+0.044474655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b20f608e4ab87ebc416d1e3b6baa6acb446b1e65d66ccad6b663e05479b944/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b20f608e4ab87ebc416d1e3b6baa6acb446b1e65d66ccad6b663e05479b944/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b20f608e4ab87ebc416d1e3b6baa6acb446b1e65d66ccad6b663e05479b944/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:02 compute-0 podman[109019]: 2025-11-29 05:13:02.522687514 +0000 UTC m=+0.154460715 container init 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:13:02 compute-0 podman[109019]: 2025-11-29 05:13:02.533431746 +0000 UTC m=+0.165204927 container start 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:13:02 compute-0 podman[109019]: 2025-11-29 05:13:02.53634286 +0000 UTC m=+0.168116081 container attach 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:13:02 compute-0 ceph-mon[75176]: 5.11 scrub starts
Nov 29 05:13:02 compute-0 ceph-mon[75176]: 5.11 scrub ok
Nov 29 05:13:02 compute-0 ceph-mon[75176]: pgmap v241: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 05:13:03 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 29 05:13:03 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 29 05:13:03 compute-0 hungry_swirles[109035]: {
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:     "0": [
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:         {
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "devices": [
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "/dev/loop3"
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             ],
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_name": "ceph_lv0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_size": "21470642176",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "name": "ceph_lv0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "tags": {
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.cluster_name": "ceph",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.crush_device_class": "",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.encrypted": "0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.osd_id": "0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.type": "block",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.vdo": "0"
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             },
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "type": "block",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "vg_name": "ceph_vg0"
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:         }
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:     ],
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:     "1": [
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:         {
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "devices": [
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "/dev/loop4"
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             ],
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_name": "ceph_lv1",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_size": "21470642176",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "name": "ceph_lv1",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "tags": {
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.cluster_name": "ceph",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.crush_device_class": "",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.encrypted": "0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.osd_id": "1",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.type": "block",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.vdo": "0"
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             },
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "type": "block",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "vg_name": "ceph_vg1"
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:         }
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:     ],
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:     "2": [
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:         {
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "devices": [
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "/dev/loop5"
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             ],
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_name": "ceph_lv2",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_size": "21470642176",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "name": "ceph_lv2",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "tags": {
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.cluster_name": "ceph",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.crush_device_class": "",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.encrypted": "0",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.osd_id": "2",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.type": "block",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:                 "ceph.vdo": "0"
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             },
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "type": "block",
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:             "vg_name": "ceph_vg2"
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:         }
Nov 29 05:13:03 compute-0 hungry_swirles[109035]:     ]
Nov 29 05:13:03 compute-0 hungry_swirles[109035]: }
Nov 29 05:13:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Nov 29 05:13:03 compute-0 systemd[1]: libpod-0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35.scope: Deactivated successfully.
Nov 29 05:13:03 compute-0 podman[109019]: 2025-11-29 05:13:03.345086281 +0000 UTC m=+0.976859462 container died 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 05:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-05b20f608e4ab87ebc416d1e3b6baa6acb446b1e65d66ccad6b663e05479b944-merged.mount: Deactivated successfully.
Nov 29 05:13:03 compute-0 podman[109019]: 2025-11-29 05:13:03.404599426 +0000 UTC m=+1.036372597 container remove 0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:13:03 compute-0 systemd[1]: libpod-conmon-0376264223ef3ff94a0d5abaad3b5e3686426819f7716b3df5a727e25b40fe35.scope: Deactivated successfully.
Nov 29 05:13:03 compute-0 sudo[108914]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:03 compute-0 sudo[109055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:13:03 compute-0 sudo[109055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:13:03 compute-0 sudo[109055]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:03 compute-0 sudo[109080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:13:03 compute-0 sudo[109080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:13:03 compute-0 sudo[109080]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:03 compute-0 sudo[109105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:13:03 compute-0 sudo[109105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:13:03 compute-0 sudo[109105]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:03 compute-0 sudo[109130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:13:03 compute-0 sudo[109130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:13:04 compute-0 podman[109193]: 2025-11-29 05:13:04.059963401 +0000 UTC m=+0.058207793 container create 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 05:13:04 compute-0 systemd[1]: Started libpod-conmon-069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810.scope.
Nov 29 05:13:04 compute-0 podman[109193]: 2025-11-29 05:13:04.028039924 +0000 UTC m=+0.026284406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:13:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:13:04 compute-0 podman[109193]: 2025-11-29 05:13:04.143081412 +0000 UTC m=+0.141325784 container init 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:13:04 compute-0 podman[109193]: 2025-11-29 05:13:04.149031702 +0000 UTC m=+0.147276094 container start 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:13:04 compute-0 podman[109193]: 2025-11-29 05:13:04.153313281 +0000 UTC m=+0.151557753 container attach 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:13:04 compute-0 pedantic_yonath[109210]: 167 167
Nov 29 05:13:04 compute-0 systemd[1]: libpod-069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810.scope: Deactivated successfully.
Nov 29 05:13:04 compute-0 podman[109193]: 2025-11-29 05:13:04.155077685 +0000 UTC m=+0.153322057 container died 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:13:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ea431d17ae595ae58f43bafea93c752e3ba1652781f1691396e4903e6e60623-merged.mount: Deactivated successfully.
Nov 29 05:13:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:04 compute-0 podman[109193]: 2025-11-29 05:13:04.195502787 +0000 UTC m=+0.193747189 container remove 069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_yonath, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:13:04 compute-0 systemd[1]: libpod-conmon-069039e53d6007c02a0ed395d1a6338ff398129d004ea15f762d971e52c11810.scope: Deactivated successfully.
Nov 29 05:13:04 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 29 05:13:04 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 29 05:13:04 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 29 05:13:04 compute-0 podman[109234]: 2025-11-29 05:13:04.359881682 +0000 UTC m=+0.039477249 container create ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:13:04 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 29 05:13:04 compute-0 systemd[1]: Started libpod-conmon-ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671.scope.
Nov 29 05:13:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51c68c932025a75c9b31b757f3dcd6b65a186ee166a281725708b8f3989c7b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51c68c932025a75c9b31b757f3dcd6b65a186ee166a281725708b8f3989c7b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51c68c932025a75c9b31b757f3dcd6b65a186ee166a281725708b8f3989c7b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51c68c932025a75c9b31b757f3dcd6b65a186ee166a281725708b8f3989c7b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:13:04 compute-0 podman[109234]: 2025-11-29 05:13:04.427342327 +0000 UTC m=+0.106937914 container init ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:13:04 compute-0 podman[109234]: 2025-11-29 05:13:04.433697118 +0000 UTC m=+0.113292685 container start ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 05:13:04 compute-0 podman[109234]: 2025-11-29 05:13:04.437706289 +0000 UTC m=+0.117301876 container attach ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:13:04 compute-0 podman[109234]: 2025-11-29 05:13:04.344834881 +0000 UTC m=+0.024430468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:13:04 compute-0 ceph-mon[75176]: 3.6 scrub starts
Nov 29 05:13:04 compute-0 ceph-mon[75176]: 3.6 scrub ok
Nov 29 05:13:04 compute-0 ceph-mon[75176]: pgmap v242: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 1 objects/s recovering
Nov 29 05:13:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]: {
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "osd_id": 0,
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "type": "bluestore"
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:     },
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "osd_id": 1,
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "type": "bluestore"
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:     },
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "osd_id": 2,
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:         "type": "bluestore"
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]:     }
Nov 29 05:13:05 compute-0 frosty_heisenberg[109250]: }
Nov 29 05:13:05 compute-0 systemd[1]: libpod-ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671.scope: Deactivated successfully.
Nov 29 05:13:05 compute-0 podman[109234]: 2025-11-29 05:13:05.422394199 +0000 UTC m=+1.101989776 container died ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e51c68c932025a75c9b31b757f3dcd6b65a186ee166a281725708b8f3989c7b2-merged.mount: Deactivated successfully.
Nov 29 05:13:05 compute-0 podman[109234]: 2025-11-29 05:13:05.470430492 +0000 UTC m=+1.150026059 container remove ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:13:05 compute-0 systemd[1]: libpod-conmon-ec44739afcd33ea2333d5bfdce82d6a15c2d306b714c1b5d7efb66b8c08c0671.scope: Deactivated successfully.
Nov 29 05:13:05 compute-0 sudo[109130]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:13:05 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:13:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:13:05 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:13:05 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 148926dd-8c6c-4325-a946-db354d846842 does not exist
Nov 29 05:13:05 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ac049d9a-2e0d-496a-8c68-0587320fa4e0 does not exist
Nov 29 05:13:05 compute-0 sudo[109297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:13:05 compute-0 sudo[109297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:13:05 compute-0 sudo[109297]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:05 compute-0 sudo[109322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:13:05 compute-0 sudo[109322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:13:05 compute-0 sudo[109322]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:05 compute-0 ceph-mon[75176]: 2.17 scrub starts
Nov 29 05:13:05 compute-0 ceph-mon[75176]: 2.17 scrub ok
Nov 29 05:13:05 compute-0 ceph-mon[75176]: 11.15 scrub starts
Nov 29 05:13:05 compute-0 ceph-mon[75176]: 11.15 scrub ok
Nov 29 05:13:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:13:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:13:06 compute-0 ceph-mon[75176]: pgmap v243: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 05:13:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 1 objects/s recovering
Nov 29 05:13:08 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 29 05:13:08 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 29 05:13:08 compute-0 sudo[108318]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:08 compute-0 ceph-mon[75176]: pgmap v244: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 1 objects/s recovering
Nov 29 05:13:09 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Nov 29 05:13:09 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Nov 29 05:13:09 compute-0 sudo[109496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpsdfcvwmhvtxdgwbmnkintpaqrnzqze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393188.8186178-137-40997459074286/AnsiballZ_command.py'
Nov 29 05:13:09 compute-0 sudo[109496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 1 objects/s recovering
Nov 29 05:13:09 compute-0 python3.9[109498]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:13:09 compute-0 ceph-mon[75176]: 8.e scrub starts
Nov 29 05:13:09 compute-0 ceph-mon[75176]: 8.e scrub ok
Nov 29 05:13:10 compute-0 sudo[109496]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:10 compute-0 ceph-mon[75176]: 3.a deep-scrub starts
Nov 29 05:13:10 compute-0 ceph-mon[75176]: 3.a deep-scrub ok
Nov 29 05:13:10 compute-0 ceph-mon[75176]: pgmap v245: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 1 objects/s recovering
Nov 29 05:13:11 compute-0 sudo[109783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hagsuqdhejpqpfrdyptzpswogqgewtfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393190.4012253-145-169020129958094/AnsiballZ_selinux.py'
Nov 29 05:13:11 compute-0 sudo[109783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 05:13:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:13:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:13:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:13:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:13:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:13:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:13:11 compute-0 python3.9[109785]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 05:13:11 compute-0 sudo[109783]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:12 compute-0 sudo[109935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqugukrephnlhblxlfkozlvenqncxsct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393191.8613026-156-23266039498582/AnsiballZ_command.py'
Nov 29 05:13:12 compute-0 sudo[109935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:12 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 29 05:13:12 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 29 05:13:12 compute-0 python3.9[109937]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 05:13:12 compute-0 sudo[109935]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:12 compute-0 ceph-mon[75176]: pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 05:13:13 compute-0 sudo[110087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfnjapbwhlfvnmgjpzplmwpnuoojrdtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393192.880925-164-107114528395293/AnsiballZ_file.py'
Nov 29 05:13:13 compute-0 sudo[110087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:13 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 29 05:13:13 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 29 05:13:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:13 compute-0 python3.9[110089]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:13:13 compute-0 sudo[110087]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:13 compute-0 ceph-mon[75176]: 8.15 scrub starts
Nov 29 05:13:13 compute-0 ceph-mon[75176]: 8.15 scrub ok
Nov 29 05:13:13 compute-0 ceph-mon[75176]: 5.13 scrub starts
Nov 29 05:13:13 compute-0 ceph-mon[75176]: 5.13 scrub ok
Nov 29 05:13:14 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 29 05:13:14 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 29 05:13:14 compute-0 sudo[110239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knlokmiytruimpdujhcrfxtwkdfended ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393193.56915-172-152377432035652/AnsiballZ_mount.py'
Nov 29 05:13:14 compute-0 sudo[110239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:14 compute-0 python3.9[110241]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 05:13:14 compute-0 sudo[110239]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:14 compute-0 ceph-mon[75176]: pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:15 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 29 05:13:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:15 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 29 05:13:15 compute-0 sudo[110391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmjjnycknxehjlngatptupaxzeqwzjhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393195.2160954-200-146814515810853/AnsiballZ_file.py'
Nov 29 05:13:15 compute-0 sudo[110391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:15 compute-0 python3.9[110393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:13:15 compute-0 sudo[110391]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:15 compute-0 ceph-mon[75176]: 11.e scrub starts
Nov 29 05:13:15 compute-0 ceph-mon[75176]: 11.e scrub ok
Nov 29 05:13:15 compute-0 ceph-mon[75176]: 5.12 scrub starts
Nov 29 05:13:15 compute-0 ceph-mon[75176]: 5.12 scrub ok
Nov 29 05:13:16 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 29 05:13:16 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 29 05:13:16 compute-0 sudo[110543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsarccjclgsjlesibcpisjuxskoovehs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393195.9609735-208-236797962540842/AnsiballZ_stat.py'
Nov 29 05:13:16 compute-0 sudo[110543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:16 compute-0 python3.9[110545]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:13:16 compute-0 sudo[110543]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:16 compute-0 ceph-mon[75176]: pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:16 compute-0 sudo[110621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezrihwufeskflcdpxldwfcfluqtvkonu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393195.9609735-208-236797962540842/AnsiballZ_file.py'
Nov 29 05:13:16 compute-0 sudo[110621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:17 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 29 05:13:17 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 29 05:13:17 compute-0 python3.9[110623]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:13:17 compute-0 sudo[110621]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:17 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 29 05:13:17 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 29 05:13:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:17 compute-0 ceph-mon[75176]: 8.11 scrub starts
Nov 29 05:13:17 compute-0 ceph-mon[75176]: 8.11 scrub ok
Nov 29 05:13:17 compute-0 sudo[110773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvfuhqdhidzmomvgskbeuryuauouruks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393197.624511-229-47811023823441/AnsiballZ_stat.py'
Nov 29 05:13:17 compute-0 sudo[110773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:18 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 29 05:13:18 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 29 05:13:18 compute-0 python3.9[110775]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:13:18 compute-0 sudo[110773]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:18 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 29 05:13:18 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 29 05:13:18 compute-0 ceph-mon[75176]: 7.18 scrub starts
Nov 29 05:13:18 compute-0 ceph-mon[75176]: 7.18 scrub ok
Nov 29 05:13:18 compute-0 ceph-mon[75176]: 8.12 scrub starts
Nov 29 05:13:18 compute-0 ceph-mon[75176]: 8.12 scrub ok
Nov 29 05:13:18 compute-0 ceph-mon[75176]: pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:19 compute-0 sudo[110927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbkuamoiacmlfoisplpewnlwnjpahhsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393198.8327749-242-34125160307723/AnsiballZ_getent.py'
Nov 29 05:13:19 compute-0 sudo[110927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:19 compute-0 python3.9[110929]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 05:13:19 compute-0 sudo[110927]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:19 compute-0 ceph-mon[75176]: 7.f scrub starts
Nov 29 05:13:19 compute-0 ceph-mon[75176]: 7.f scrub ok
Nov 29 05:13:19 compute-0 ceph-mon[75176]: 3.1d scrub starts
Nov 29 05:13:19 compute-0 ceph-mon[75176]: 3.1d scrub ok
Nov 29 05:13:20 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 29 05:13:20 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 29 05:13:20 compute-0 sudo[111080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hknuxneabibxvfsstqwewphkwcwfjvdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393199.8838387-252-48679646679978/AnsiballZ_getent.py'
Nov 29 05:13:20 compute-0 sudo[111080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:20 compute-0 python3.9[111082]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 05:13:20 compute-0 sudo[111080]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:20 compute-0 ceph-mon[75176]: pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:21 compute-0 sudo[111233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvgvfgmrtzyacodnwaawigmkikfjgqic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393200.6332328-260-199491228740296/AnsiballZ_group.py'
Nov 29 05:13:21 compute-0 sudo[111233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:21 compute-0 python3.9[111235]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 05:13:21 compute-0 sudo[111233]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:21 compute-0 ceph-mon[75176]: 8.9 scrub starts
Nov 29 05:13:21 compute-0 ceph-mon[75176]: 8.9 scrub ok
Nov 29 05:13:22 compute-0 sudo[111385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdgogzkovbusadbtwnjmustpqhaftshu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393201.8209715-269-94345616358728/AnsiballZ_file.py'
Nov 29 05:13:22 compute-0 sudo[111385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:22 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.18 deep-scrub starts
Nov 29 05:13:22 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.18 deep-scrub ok
Nov 29 05:13:22 compute-0 python3.9[111387]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 05:13:22 compute-0 sudo[111385]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:23 compute-0 ceph-mon[75176]: pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:23 compute-0 ceph-mon[75176]: 3.18 deep-scrub starts
Nov 29 05:13:23 compute-0 ceph-mon[75176]: 3.18 deep-scrub ok
Nov 29 05:13:23 compute-0 sudo[111537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdccmboxbfkuyabhguhxbtngrmmqxkts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393202.8770032-280-271147800664613/AnsiballZ_dnf.py'
Nov 29 05:13:23 compute-0 sudo[111537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:23 compute-0 python3.9[111539]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:13:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:24 compute-0 sudo[111537]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:25 compute-0 ceph-mon[75176]: pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:25 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Nov 29 05:13:25 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Nov 29 05:13:25 compute-0 sudo[111690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihjhxossuseooqzmdgqliuznolsqobav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393205.0310414-288-146108964140281/AnsiballZ_file.py'
Nov 29 05:13:25 compute-0 sudo[111690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:25 compute-0 python3.9[111692]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:13:25 compute-0 sudo[111690]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:26 compute-0 ceph-mon[75176]: 2.15 scrub starts
Nov 29 05:13:26 compute-0 ceph-mon[75176]: 2.15 scrub ok
Nov 29 05:13:26 compute-0 sudo[111842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beobtrekwrjullaqotbktggoolowicyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393205.8505237-296-72846160277974/AnsiballZ_stat.py'
Nov 29 05:13:26 compute-0 sudo[111842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:26 compute-0 python3.9[111844]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:13:26 compute-0 sudo[111842]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:26 compute-0 sudo[111920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvlpithcadmhkzgaclttmxcdjjkdcruc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393205.8505237-296-72846160277974/AnsiballZ_file.py'
Nov 29 05:13:26 compute-0 sudo[111920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:26 compute-0 python3.9[111922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:13:26 compute-0 sudo[111920]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:27 compute-0 ceph-mon[75176]: pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:27 compute-0 sudo[112072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rceburxbwoajqnojyappgqhhtmarvggm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393207.2422173-309-254235649104646/AnsiballZ_stat.py'
Nov 29 05:13:27 compute-0 sudo[112072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:27 compute-0 python3.9[112074]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:13:27 compute-0 sudo[112072]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:28 compute-0 sudo[112150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgiftkyeospdngoiguozukkziwjngsrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393207.2422173-309-254235649104646/AnsiballZ_file.py'
Nov 29 05:13:28 compute-0 sudo[112150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:28 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 29 05:13:28 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 29 05:13:28 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 29 05:13:28 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 29 05:13:28 compute-0 python3.9[112152]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:13:28 compute-0 sudo[112150]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:29 compute-0 ceph-mon[75176]: pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:29 compute-0 ceph-mon[75176]: 11.12 scrub starts
Nov 29 05:13:29 compute-0 ceph-mon[75176]: 11.12 scrub ok
Nov 29 05:13:29 compute-0 ceph-mon[75176]: 10.1a scrub starts
Nov 29 05:13:29 compute-0 ceph-mon[75176]: 10.1a scrub ok
Nov 29 05:13:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:29 compute-0 sudo[112302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulctepotmzoubcafptzcnbrxskfyguug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393208.949277-324-153077505173306/AnsiballZ_dnf.py'
Nov 29 05:13:29 compute-0 sudo[112302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:29 compute-0 python3.9[112304]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:13:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 29 05:13:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 29 05:13:30 compute-0 sudo[112302]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:31 compute-0 ceph-mon[75176]: pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:31 compute-0 ceph-mon[75176]: 3.7 scrub starts
Nov 29 05:13:31 compute-0 ceph-mon[75176]: 3.7 scrub ok
Nov 29 05:13:31 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 29 05:13:31 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 29 05:13:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:31 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 29 05:13:31 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 29 05:13:31 compute-0 python3.9[112455]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:13:32 compute-0 ceph-mon[75176]: 7.1c scrub starts
Nov 29 05:13:32 compute-0 ceph-mon[75176]: 7.1c scrub ok
Nov 29 05:13:32 compute-0 ceph-mon[75176]: 10.19 scrub starts
Nov 29 05:13:32 compute-0 ceph-mon[75176]: 10.19 scrub ok
Nov 29 05:13:32 compute-0 python3.9[112607]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 05:13:33 compute-0 ceph-mon[75176]: pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:33 compute-0 python3.9[112757]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:13:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:34 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 29 05:13:34 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 29 05:13:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 29 05:13:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 29 05:13:34 compute-0 sudo[112907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhzsdbqusccrzeqlvedbyqdvzxiarjra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393213.931186-365-79797887730277/AnsiballZ_systemd.py'
Nov 29 05:13:34 compute-0 sudo[112907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:34 compute-0 python3.9[112909]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:13:34 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 05:13:34 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 05:13:34 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 05:13:34 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 05:13:35 compute-0 ceph-mon[75176]: pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:35 compute-0 ceph-mon[75176]: 8.b scrub starts
Nov 29 05:13:35 compute-0 ceph-mon[75176]: 8.b scrub ok
Nov 29 05:13:35 compute-0 ceph-mon[75176]: 5.16 scrub starts
Nov 29 05:13:35 compute-0 ceph-mon[75176]: 5.16 scrub ok
Nov 29 05:13:35 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 05:13:35 compute-0 sudo[112907]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:35 compute-0 python3.9[113071]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 05:13:36 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 29 05:13:36 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 29 05:13:36 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 29 05:13:36 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 29 05:13:37 compute-0 ceph-mon[75176]: pgmap v258: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:37 compute-0 ceph-mon[75176]: 11.1 scrub starts
Nov 29 05:13:37 compute-0 ceph-mon[75176]: 11.1 scrub ok
Nov 29 05:13:37 compute-0 ceph-mon[75176]: 7.2 scrub starts
Nov 29 05:13:37 compute-0 ceph-mon[75176]: 7.2 scrub ok
Nov 29 05:13:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:38 compute-0 ceph-mon[75176]: pgmap v259: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:38 compute-0 sudo[113221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuimykesiltfaazcfzmmfumcimvjjxtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393217.8501594-422-195425316822071/AnsiballZ_systemd.py'
Nov 29 05:13:38 compute-0 sudo[113221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:38 compute-0 python3.9[113223]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:13:38 compute-0 sudo[113221]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:39 compute-0 sudo[113375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjhuuzufkfearptzjlhoyabizecdyhtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393218.7597249-422-98239456634220/AnsiballZ_systemd.py'
Nov 29 05:13:39 compute-0 sudo[113375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:39 compute-0 python3.9[113377]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:13:39 compute-0 sudo[113375]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:40 compute-0 sshd-session[106404]: Connection closed by 192.168.122.30 port 52622
Nov 29 05:13:40 compute-0 sshd-session[106401]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:13:40 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 29 05:13:40 compute-0 systemd[1]: session-34.scope: Consumed 1min 5.539s CPU time.
Nov 29 05:13:40 compute-0 systemd-logind[793]: Session 34 logged out. Waiting for processes to exit.
Nov 29 05:13:40 compute-0 systemd-logind[793]: Removed session 34.
Nov 29 05:13:40 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 29 05:13:40 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 29 05:13:40 compute-0 ceph-mon[75176]: pgmap v260: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:41 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 29 05:13:41 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:13:41
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'images', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'backups']
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:13:41 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:13:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:13:41 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 29 05:13:41 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 29 05:13:41 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 29 05:13:41 compute-0 ceph-mon[75176]: 7.6 scrub starts
Nov 29 05:13:41 compute-0 ceph-mon[75176]: 7.6 scrub ok
Nov 29 05:13:42 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 29 05:13:42 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 29 05:13:42 compute-0 ceph-mon[75176]: 8.f scrub starts
Nov 29 05:13:42 compute-0 ceph-mon[75176]: 8.f scrub ok
Nov 29 05:13:42 compute-0 ceph-mon[75176]: pgmap v261: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:42 compute-0 ceph-mon[75176]: 11.d scrub starts
Nov 29 05:13:42 compute-0 ceph-mon[75176]: 11.d scrub ok
Nov 29 05:13:42 compute-0 ceph-mon[75176]: 5.9 scrub starts
Nov 29 05:13:42 compute-0 ceph-mon[75176]: 5.9 scrub ok
Nov 29 05:13:43 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 29 05:13:43 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 29 05:13:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:43 compute-0 ceph-mon[75176]: 11.11 scrub starts
Nov 29 05:13:43 compute-0 ceph-mon[75176]: 11.11 scrub ok
Nov 29 05:13:44 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 29 05:13:44 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 29 05:13:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:44 compute-0 ceph-mon[75176]: 7.4 scrub starts
Nov 29 05:13:44 compute-0 ceph-mon[75176]: 7.4 scrub ok
Nov 29 05:13:44 compute-0 ceph-mon[75176]: pgmap v262: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:45 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 29 05:13:45 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 29 05:13:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:45 compute-0 ceph-mon[75176]: 3.c scrub starts
Nov 29 05:13:45 compute-0 ceph-mon[75176]: 3.c scrub ok
Nov 29 05:13:46 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 29 05:13:46 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 29 05:13:46 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 29 05:13:46 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 29 05:13:46 compute-0 sshd-session[113404]: Accepted publickey for zuul from 192.168.122.30 port 46706 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:13:46 compute-0 systemd-logind[793]: New session 35 of user zuul.
Nov 29 05:13:46 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 29 05:13:46 compute-0 sshd-session[113404]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:13:46 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 29 05:13:46 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 29 05:13:46 compute-0 ceph-mon[75176]: 11.4 scrub starts
Nov 29 05:13:46 compute-0 ceph-mon[75176]: 11.4 scrub ok
Nov 29 05:13:46 compute-0 ceph-mon[75176]: pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:47 compute-0 python3.9[113557]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:13:47 compute-0 ceph-mon[75176]: 7.9 scrub starts
Nov 29 05:13:47 compute-0 ceph-mon[75176]: 7.9 scrub ok
Nov 29 05:13:47 compute-0 ceph-mon[75176]: 7.1 scrub starts
Nov 29 05:13:47 compute-0 ceph-mon[75176]: 7.1 scrub ok
Nov 29 05:13:47 compute-0 ceph-mon[75176]: 2.d scrub starts
Nov 29 05:13:47 compute-0 ceph-mon[75176]: 2.d scrub ok
Nov 29 05:13:48 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 29 05:13:48 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 29 05:13:48 compute-0 ceph-mon[75176]: pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:48 compute-0 ceph-mon[75176]: 8.6 scrub starts
Nov 29 05:13:48 compute-0 sudo[113711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plwbriswwnzhxfwwtnkdjybxlqibhoui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393228.1081057-36-220258922196184/AnsiballZ_getent.py'
Nov 29 05:13:48 compute-0 sudo[113711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:48 compute-0 python3.9[113713]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 05:13:48 compute-0 sudo[113711]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:49 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 29 05:13:49 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 29 05:13:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:49 compute-0 ceph-mon[75176]: 8.6 scrub ok
Nov 29 05:13:49 compute-0 sudo[113864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrpjarbzrymrgqbujdmwnqhpfqoznjhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393229.3442707-48-69691290767004/AnsiballZ_setup.py'
Nov 29 05:13:49 compute-0 sudo[113864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:50 compute-0 python3.9[113866]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:13:50 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 29 05:13:50 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 29 05:13:50 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 29 05:13:50 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 29 05:13:50 compute-0 sudo[113864]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:50 compute-0 ceph-mon[75176]: 8.d scrub starts
Nov 29 05:13:50 compute-0 ceph-mon[75176]: 8.d scrub ok
Nov 29 05:13:50 compute-0 ceph-mon[75176]: pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:50 compute-0 sudo[113948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kygqpiyspmvamhujamwkpkncotgfqdth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393229.3442707-48-69691290767004/AnsiballZ_dnf.py'
Nov 29 05:13:50 compute-0 sudo[113948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:50 compute-0 python3.9[113950]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:13:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:51 compute-0 ceph-mon[75176]: 3.f scrub starts
Nov 29 05:13:51 compute-0 ceph-mon[75176]: 3.f scrub ok
Nov 29 05:13:51 compute-0 ceph-mon[75176]: 7.5 scrub starts
Nov 29 05:13:51 compute-0 ceph-mon[75176]: 7.5 scrub ok
Nov 29 05:13:51 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 29 05:13:51 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 29 05:13:52 compute-0 sudo[113948]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:52 compute-0 ceph-mon[75176]: pgmap v266: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:52 compute-0 ceph-mon[75176]: 11.6 scrub starts
Nov 29 05:13:52 compute-0 ceph-mon[75176]: 11.6 scrub ok
Nov 29 05:13:52 compute-0 sudo[114101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bevysyyvsvhpqtpghuwqcqruszikhicw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393232.5338466-62-60068619224351/AnsiballZ_dnf.py'
Nov 29 05:13:52 compute-0 sudo[114101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:52 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 29 05:13:52 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 29 05:13:53 compute-0 python3.9[114103]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:13:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:53 compute-0 ceph-mon[75176]: 11.19 scrub starts
Nov 29 05:13:53 compute-0 ceph-mon[75176]: 11.19 scrub ok
Nov 29 05:13:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:54 compute-0 sudo[114101]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:54 compute-0 ceph-mon[75176]: pgmap v267: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:54 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 29 05:13:54 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 29 05:13:55 compute-0 sudo[114254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvyanimtaquxynuwnflthewrtrgwtcrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393234.5711126-70-106817141790223/AnsiballZ_systemd.py'
Nov 29 05:13:55 compute-0 sudo[114254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:55 compute-0 python3.9[114256]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 05:13:55 compute-0 ceph-mon[75176]: 8.1a scrub starts
Nov 29 05:13:55 compute-0 ceph-mon[75176]: 8.1a scrub ok
Nov 29 05:13:55 compute-0 sudo[114254]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:56 compute-0 python3.9[114409]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:13:56 compute-0 ceph-mon[75176]: pgmap v268: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:56 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 29 05:13:56 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 29 05:13:57 compute-0 sudo[114559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myqjgjtxgqaencfycqoxkzsifebcjvkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393236.6166854-88-61370593582531/AnsiballZ_sefcontext.py'
Nov 29 05:13:57 compute-0 sudo[114559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:57 compute-0 python3.9[114561]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 05:13:57 compute-0 ceph-mon[75176]: 3.12 scrub starts
Nov 29 05:13:57 compute-0 ceph-mon[75176]: 3.12 scrub ok
Nov 29 05:13:57 compute-0 sudo[114559]: pam_unix(sudo:session): session closed for user root
Nov 29 05:13:58 compute-0 ceph-mon[75176]: pgmap v269: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:58 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 29 05:13:58 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 29 05:13:58 compute-0 python3.9[114711]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:13:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:13:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:13:59 compute-0 sudo[114867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehyhwonumxgbsctwtpoiekhvpyebqhgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393239.10658-106-264465547205049/AnsiballZ_dnf.py'
Nov 29 05:13:59 compute-0 sudo[114867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:13:59 compute-0 ceph-mon[75176]: 10.6 scrub starts
Nov 29 05:13:59 compute-0 ceph-mon[75176]: 10.6 scrub ok
Nov 29 05:13:59 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 29 05:13:59 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 29 05:13:59 compute-0 python3.9[114869]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:14:00 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 29 05:14:00 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 29 05:14:00 compute-0 ceph-mon[75176]: pgmap v270: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:00 compute-0 ceph-mon[75176]: 10.11 scrub starts
Nov 29 05:14:00 compute-0 ceph-mon[75176]: 10.11 scrub ok
Nov 29 05:14:01 compute-0 sudo[114867]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:01 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 29 05:14:01 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 29 05:14:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:01 compute-0 ceph-mon[75176]: 11.9 scrub starts
Nov 29 05:14:01 compute-0 ceph-mon[75176]: 11.9 scrub ok
Nov 29 05:14:01 compute-0 sudo[115020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbloysqfpieumplwvziqiaacormmylte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393241.3134155-114-252852255459050/AnsiballZ_command.py'
Nov 29 05:14:01 compute-0 sudo[115020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:02 compute-0 python3.9[115022]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:14:02 compute-0 ceph-mon[75176]: 7.c scrub starts
Nov 29 05:14:02 compute-0 ceph-mon[75176]: 7.c scrub ok
Nov 29 05:14:02 compute-0 ceph-mon[75176]: pgmap v271: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:02 compute-0 sudo[115020]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:03 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 29 05:14:03 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 29 05:14:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:03 compute-0 sudo[115307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkxgwrronadvmzzyxrocuykrkghuqdav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393242.961594-122-270498336782552/AnsiballZ_file.py'
Nov 29 05:14:03 compute-0 sudo[115307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:03 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 29 05:14:03 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 29 05:14:03 compute-0 python3.9[115309]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 05:14:03 compute-0 sudo[115307]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:04 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 29 05:14:04 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 29 05:14:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:04 compute-0 python3.9[115459]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:14:04 compute-0 ceph-mon[75176]: 3.8 scrub starts
Nov 29 05:14:04 compute-0 ceph-mon[75176]: 3.8 scrub ok
Nov 29 05:14:04 compute-0 ceph-mon[75176]: pgmap v272: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:04 compute-0 ceph-mon[75176]: 10.10 scrub starts
Nov 29 05:14:04 compute-0 ceph-mon[75176]: 10.10 scrub ok
Nov 29 05:14:04 compute-0 systemd[76809]: Created slice User Background Tasks Slice.
Nov 29 05:14:04 compute-0 systemd[76809]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 05:14:04 compute-0 sudo[115612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtdiwnrfegsmrtyoccmkshoibepnwknr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393244.6303694-138-143098346920943/AnsiballZ_dnf.py'
Nov 29 05:14:04 compute-0 sudo[115612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:04 compute-0 systemd[76809]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 05:14:05 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 29 05:14:05 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 29 05:14:05 compute-0 python3.9[115614]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:14:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:05 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 29 05:14:05 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 29 05:14:05 compute-0 ceph-mon[75176]: 3.9 scrub starts
Nov 29 05:14:05 compute-0 ceph-mon[75176]: 3.9 scrub ok
Nov 29 05:14:05 compute-0 sudo[115616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:14:05 compute-0 sudo[115616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:05 compute-0 sudo[115616]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:05 compute-0 sudo[115641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:14:05 compute-0 sudo[115641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:05 compute-0 sudo[115641]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:05 compute-0 sudo[115666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:14:05 compute-0 sudo[115666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:05 compute-0 sudo[115666]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:05 compute-0 sudo[115691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:14:05 compute-0 sudo[115691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:05 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 29 05:14:06 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 29 05:14:06 compute-0 sudo[115691]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:14:06 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:14:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:14:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:14:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:14:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:14:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:14:06 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:14:06 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 2e5ac6cc-a889-4d23-b5d3-f4b6e7c751cc does not exist
Nov 29 05:14:06 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 3d883fe7-32d6-45d6-9707-aecaad9b7fab does not exist
Nov 29 05:14:06 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 38e67fe7-2e60-4429-9227-5ac4acbbe768 does not exist
Nov 29 05:14:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:14:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:14:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:14:06 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:14:06 compute-0 sudo[115746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:14:06 compute-0 sudo[115746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:06 compute-0 sudo[115746]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:06 compute-0 sudo[115771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:14:06 compute-0 sudo[115771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:06 compute-0 sudo[115771]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:06 compute-0 sudo[115796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:14:06 compute-0 sudo[115796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:06 compute-0 sudo[115796]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:06 compute-0 sudo[115612]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:06 compute-0 sudo[115821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:14:06 compute-0 sudo[115821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:06 compute-0 ceph-mon[75176]: 8.18 scrub starts
Nov 29 05:14:06 compute-0 ceph-mon[75176]: 8.18 scrub ok
Nov 29 05:14:06 compute-0 ceph-mon[75176]: pgmap v273: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:06 compute-0 ceph-mon[75176]: 10.12 scrub starts
Nov 29 05:14:06 compute-0 ceph-mon[75176]: 10.12 scrub ok
Nov 29 05:14:06 compute-0 ceph-mon[75176]: 8.1d scrub starts
Nov 29 05:14:06 compute-0 ceph-mon[75176]: 8.1d scrub ok
Nov 29 05:14:06 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:14:06 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:14:06 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:14:06 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:14:06 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:14:06 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:14:06 compute-0 podman[115940]: 2025-11-29 05:14:06.941699181 +0000 UTC m=+0.043193703 container create 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 05:14:06 compute-0 systemd[1]: Started libpod-conmon-8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7.scope.
Nov 29 05:14:07 compute-0 podman[115940]: 2025-11-29 05:14:06.923475333 +0000 UTC m=+0.024969855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:14:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:14:07 compute-0 podman[115940]: 2025-11-29 05:14:07.053227353 +0000 UTC m=+0.154721895 container init 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:14:07 compute-0 podman[115940]: 2025-11-29 05:14:07.06980234 +0000 UTC m=+0.171296862 container start 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:14:07 compute-0 podman[115940]: 2025-11-29 05:14:07.073324097 +0000 UTC m=+0.174818639 container attach 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:14:07 compute-0 systemd[1]: libpod-8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7.scope: Deactivated successfully.
Nov 29 05:14:07 compute-0 lucid_yonath[115990]: 167 167
Nov 29 05:14:07 compute-0 conmon[115990]: conmon 8345e7ad5a2719a8ef77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7.scope/container/memory.events
Nov 29 05:14:07 compute-0 podman[115940]: 2025-11-29 05:14:07.081725363 +0000 UTC m=+0.183219905 container died 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:14:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdae0fa13ce8fe99b5e4a497d057ea1f01f04005168c7d8dfdd75da6c86d0962-merged.mount: Deactivated successfully.
Nov 29 05:14:07 compute-0 podman[115940]: 2025-11-29 05:14:07.139686617 +0000 UTC m=+0.241181159 container remove 8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:14:07 compute-0 systemd[1]: libpod-conmon-8345e7ad5a2719a8ef771e3f066c1b58d2658de85d98d9db5409acf2e460c7a7.scope: Deactivated successfully.
Nov 29 05:14:07 compute-0 sudo[116069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qilqmyawbwjypmnfjdnceqnpazvctusq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393246.8497422-147-2325962735487/AnsiballZ_dnf.py'
Nov 29 05:14:07 compute-0 sudo[116069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:07 compute-0 podman[116077]: 2025-11-29 05:14:07.326675484 +0000 UTC m=+0.053357662 container create f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:14:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:07 compute-0 podman[116077]: 2025-11-29 05:14:07.299178868 +0000 UTC m=+0.025861056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:14:07 compute-0 systemd[1]: Started libpod-conmon-f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32.scope.
Nov 29 05:14:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:07 compute-0 podman[116077]: 2025-11-29 05:14:07.46159356 +0000 UTC m=+0.188275738 container init f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:14:07 compute-0 podman[116077]: 2025-11-29 05:14:07.478097186 +0000 UTC m=+0.204779364 container start f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:14:07 compute-0 podman[116077]: 2025-11-29 05:14:07.484028612 +0000 UTC m=+0.210710790 container attach f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:14:07 compute-0 python3.9[116072]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:14:08 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 29 05:14:08 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 29 05:14:08 compute-0 intelligent_archimedes[116093]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:14:08 compute-0 intelligent_archimedes[116093]: --> relative data size: 1.0
Nov 29 05:14:08 compute-0 intelligent_archimedes[116093]: --> All data devices are unavailable
Nov 29 05:14:08 compute-0 podman[116077]: 2025-11-29 05:14:08.575026109 +0000 UTC m=+1.301708247 container died f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 05:14:08 compute-0 systemd[1]: libpod-f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32.scope: Deactivated successfully.
Nov 29 05:14:08 compute-0 systemd[1]: libpod-f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32.scope: Consumed 1.045s CPU time.
Nov 29 05:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-31fb2547ec4dc06f592fed606fd2feacc6d738e1f70a482576afd6a10c5d167b-merged.mount: Deactivated successfully.
Nov 29 05:14:08 compute-0 podman[116077]: 2025-11-29 05:14:08.634615034 +0000 UTC m=+1.361297182 container remove f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:14:08 compute-0 ceph-mon[75176]: pgmap v274: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:08 compute-0 systemd[1]: libpod-conmon-f32141767912fa839e4afc4ddafca3c78704a59f4d9f52d2aee31c413cc99e32.scope: Deactivated successfully.
Nov 29 05:14:08 compute-0 sudo[115821]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:08 compute-0 sudo[116135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:14:08 compute-0 sudo[116135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:08 compute-0 sudo[116135]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:08 compute-0 sudo[116069]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:08 compute-0 sudo[116160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:14:08 compute-0 sudo[116160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:08 compute-0 sudo[116160]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:08 compute-0 sudo[116186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:14:08 compute-0 sudo[116186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:08 compute-0 sudo[116186]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:08 compute-0 sudo[116234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:14:08 compute-0 sudo[116234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:09 compute-0 podman[116323]: 2025-11-29 05:14:09.202708379 +0000 UTC m=+0.061975525 container create 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:14:09 compute-0 systemd[1]: Started libpod-conmon-02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29.scope.
Nov 29 05:14:09 compute-0 podman[116323]: 2025-11-29 05:14:09.169951773 +0000 UTC m=+0.029218989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:14:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:14:09 compute-0 podman[116323]: 2025-11-29 05:14:09.289412761 +0000 UTC m=+0.148679937 container init 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:14:09 compute-0 podman[116323]: 2025-11-29 05:14:09.299398706 +0000 UTC m=+0.158665872 container start 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:14:09 compute-0 podman[116323]: 2025-11-29 05:14:09.303564799 +0000 UTC m=+0.162832065 container attach 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:14:09 compute-0 goofy_khorana[116370]: 167 167
Nov 29 05:14:09 compute-0 systemd[1]: libpod-02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29.scope: Deactivated successfully.
Nov 29 05:14:09 compute-0 podman[116323]: 2025-11-29 05:14:09.306418129 +0000 UTC m=+0.165685265 container died 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:14:09 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 29 05:14:09 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 29 05:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-115eeb9252794fd8aa232c5db2a0b674fef0b3bc0c7f0a5554a05e1bcfeb1bbb-merged.mount: Deactivated successfully.
Nov 29 05:14:09 compute-0 podman[116323]: 2025-11-29 05:14:09.347897217 +0000 UTC m=+0.207164353 container remove 02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_khorana, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:14:09 compute-0 systemd[1]: libpod-conmon-02ffcefe6349d68d2a34ac0e90791c60c9eb982383dcdc7dd08c16b5171ada29.scope: Deactivated successfully.
Nov 29 05:14:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:09 compute-0 sudo[116467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bisbadleoejzqhjrhpegzukisduzodij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393249.124183-159-169272923684468/AnsiballZ_stat.py'
Nov 29 05:14:09 compute-0 sudo[116467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:09 compute-0 podman[116465]: 2025-11-29 05:14:09.516078502 +0000 UTC m=+0.046725100 container create 4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:14:09 compute-0 systemd[1]: Started libpod-conmon-4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68.scope.
Nov 29 05:14:09 compute-0 podman[116465]: 2025-11-29 05:14:09.501291888 +0000 UTC m=+0.031938496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:14:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283186c87e133afe32a276f4b77a548c5926116cbd15cbea39d78ae5cd982159/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283186c87e133afe32a276f4b77a548c5926116cbd15cbea39d78ae5cd982159/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283186c87e133afe32a276f4b77a548c5926116cbd15cbea39d78ae5cd982159/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283186c87e133afe32a276f4b77a548c5926116cbd15cbea39d78ae5cd982159/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:09 compute-0 podman[116465]: 2025-11-29 05:14:09.617750031 +0000 UTC m=+0.148396709 container init 4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 05:14:09 compute-0 podman[116465]: 2025-11-29 05:14:09.628653559 +0000 UTC m=+0.159300197 container start 4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:14:09 compute-0 podman[116465]: 2025-11-29 05:14:09.632690099 +0000 UTC m=+0.163336737 container attach 4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 05:14:09 compute-0 ceph-mon[75176]: 7.e scrub starts
Nov 29 05:14:09 compute-0 ceph-mon[75176]: 7.e scrub ok
Nov 29 05:14:09 compute-0 python3.9[116474]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:14:09 compute-0 sudo[116467]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:09 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts
Nov 29 05:14:10 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]: {
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:     "0": [
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:         {
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "devices": [
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "/dev/loop3"
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             ],
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_name": "ceph_lv0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_size": "21470642176",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "name": "ceph_lv0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "tags": {
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.cluster_name": "ceph",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.crush_device_class": "",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.encrypted": "0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.osd_id": "0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.type": "block",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.vdo": "0"
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             },
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "type": "block",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "vg_name": "ceph_vg0"
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:         }
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:     ],
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:     "1": [
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:         {
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "devices": [
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "/dev/loop4"
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             ],
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_name": "ceph_lv1",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_size": "21470642176",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "name": "ceph_lv1",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "tags": {
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.cluster_name": "ceph",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.crush_device_class": "",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.encrypted": "0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.osd_id": "1",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.type": "block",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.vdo": "0"
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             },
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "type": "block",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "vg_name": "ceph_vg1"
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:         }
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:     ],
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:     "2": [
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:         {
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "devices": [
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "/dev/loop5"
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             ],
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_name": "ceph_lv2",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_size": "21470642176",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "name": "ceph_lv2",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "tags": {
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.cluster_name": "ceph",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.crush_device_class": "",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.encrypted": "0",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.osd_id": "2",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.type": "block",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:                 "ceph.vdo": "0"
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             },
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "type": "block",
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:             "vg_name": "ceph_vg2"
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:         }
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]:     ]
Nov 29 05:14:10 compute-0 hopeful_dewdney[116485]: }
Nov 29 05:14:10 compute-0 systemd[1]: libpod-4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68.scope: Deactivated successfully.
Nov 29 05:14:10 compute-0 podman[116465]: 2025-11-29 05:14:10.43663352 +0000 UTC m=+0.967280128 container died 4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-283186c87e133afe32a276f4b77a548c5926116cbd15cbea39d78ae5cd982159-merged.mount: Deactivated successfully.
Nov 29 05:14:10 compute-0 podman[116465]: 2025-11-29 05:14:10.509743767 +0000 UTC m=+1.040390375 container remove 4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:14:10 compute-0 systemd[1]: libpod-conmon-4293dd5290ca316db78221ff8df4e82ae17a8241f586d294f28a2fa53088af68.scope: Deactivated successfully.
Nov 29 05:14:10 compute-0 sudo[116656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alojmkxbnacdeldccjozxgooxwugaczw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393249.9295118-167-84699418367100/AnsiballZ_slurp.py'
Nov 29 05:14:10 compute-0 sudo[116656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:10 compute-0 sudo[116234]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:10 compute-0 sudo[116659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:14:10 compute-0 sudo[116659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:10 compute-0 sudo[116659]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:10 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 29 05:14:10 compute-0 rsyslogd[1003]: imjournal from <np0005539482:hopeful_dewdney>: begin to drop messages due to rate-limiting
Nov 29 05:14:10 compute-0 ceph-mon[75176]: 11.8 scrub starts
Nov 29 05:14:10 compute-0 ceph-mon[75176]: 11.8 scrub ok
Nov 29 05:14:10 compute-0 ceph-mon[75176]: pgmap v275: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:10 compute-0 ceph-mon[75176]: 3.15 deep-scrub starts
Nov 29 05:14:10 compute-0 ceph-mon[75176]: 3.15 deep-scrub ok
Nov 29 05:14:10 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 29 05:14:10 compute-0 sudo[116684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:14:10 compute-0 sudo[116684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:10 compute-0 sudo[116684]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:10 compute-0 python3.9[116658]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 05:14:10 compute-0 sudo[116709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:14:10 compute-0 sudo[116709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:10 compute-0 sudo[116709]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:10 compute-0 sudo[116656]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:10 compute-0 sudo[116734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:14:10 compute-0 sudo[116734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:11 compute-0 podman[116824]: 2025-11-29 05:14:11.191866734 +0000 UTC m=+0.068151746 container create 3996d17cdad6497c0cc54ce1ba294018df235fd04c05f78ff4b2875355c869e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:14:11 compute-0 systemd[1]: Started libpod-conmon-3996d17cdad6497c0cc54ce1ba294018df235fd04c05f78ff4b2875355c869e6.scope.
Nov 29 05:14:11 compute-0 podman[116824]: 2025-11-29 05:14:11.165386603 +0000 UTC m=+0.041671585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:14:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:14:11 compute-0 podman[116824]: 2025-11-29 05:14:11.281688842 +0000 UTC m=+0.157973824 container init 3996d17cdad6497c0cc54ce1ba294018df235fd04c05f78ff4b2875355c869e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:14:11 compute-0 podman[116824]: 2025-11-29 05:14:11.292552609 +0000 UTC m=+0.168837601 container start 3996d17cdad6497c0cc54ce1ba294018df235fd04c05f78ff4b2875355c869e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 05:14:11 compute-0 podman[116824]: 2025-11-29 05:14:11.297201103 +0000 UTC m=+0.173486085 container attach 3996d17cdad6497c0cc54ce1ba294018df235fd04c05f78ff4b2875355c869e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:14:11 compute-0 amazing_fermat[116840]: 167 167
Nov 29 05:14:11 compute-0 systemd[1]: libpod-3996d17cdad6497c0cc54ce1ba294018df235fd04c05f78ff4b2875355c869e6.scope: Deactivated successfully.
Nov 29 05:14:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:14:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:14:11 compute-0 podman[116845]: 2025-11-29 05:14:11.359382391 +0000 UTC m=+0.040174588 container died 3996d17cdad6497c0cc54ce1ba294018df235fd04c05f78ff4b2875355c869e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 05:14:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:14:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:14:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:14:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:14:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf535e6e6dac74843a7e616f25588639c67cdbace676c8d2c19467b79988135d-merged.mount: Deactivated successfully.
Nov 29 05:14:11 compute-0 podman[116845]: 2025-11-29 05:14:11.396166735 +0000 UTC m=+0.076958922 container remove 3996d17cdad6497c0cc54ce1ba294018df235fd04c05f78ff4b2875355c869e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:14:11 compute-0 systemd[1]: libpod-conmon-3996d17cdad6497c0cc54ce1ba294018df235fd04c05f78ff4b2875355c869e6.scope: Deactivated successfully.
Nov 29 05:14:11 compute-0 podman[116867]: 2025-11-29 05:14:11.614918963 +0000 UTC m=+0.071407586 container create c4c3c25dade75d74dffdea4ad101839bc47cda838b82830a16bf5e5c3240dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:14:11 compute-0 systemd[1]: Started libpod-conmon-c4c3c25dade75d74dffdea4ad101839bc47cda838b82830a16bf5e5c3240dbda.scope.
Nov 29 05:14:11 compute-0 podman[116867]: 2025-11-29 05:14:11.585962331 +0000 UTC m=+0.042450984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:14:11 compute-0 ceph-mon[75176]: 5.1d scrub starts
Nov 29 05:14:11 compute-0 ceph-mon[75176]: 5.1d scrub ok
Nov 29 05:14:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fba0ba342eadf747c92e27a2ff93426538248c946ca7d582718c7d67978477/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:11 compute-0 sshd-session[113407]: Connection closed by 192.168.122.30 port 46706
Nov 29 05:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fba0ba342eadf747c92e27a2ff93426538248c946ca7d582718c7d67978477/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fba0ba342eadf747c92e27a2ff93426538248c946ca7d582718c7d67978477/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53fba0ba342eadf747c92e27a2ff93426538248c946ca7d582718c7d67978477/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:14:11 compute-0 sshd-session[113404]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:14:11 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 05:14:11 compute-0 systemd[1]: session-35.scope: Consumed 18.515s CPU time.
Nov 29 05:14:11 compute-0 systemd-logind[793]: Session 35 logged out. Waiting for processes to exit.
Nov 29 05:14:11 compute-0 systemd-logind[793]: Removed session 35.
Nov 29 05:14:11 compute-0 podman[116867]: 2025-11-29 05:14:11.74335577 +0000 UTC m=+0.199844473 container init c4c3c25dade75d74dffdea4ad101839bc47cda838b82830a16bf5e5c3240dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:14:11 compute-0 podman[116867]: 2025-11-29 05:14:11.754940225 +0000 UTC m=+0.211428838 container start c4c3c25dade75d74dffdea4ad101839bc47cda838b82830a16bf5e5c3240dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:14:11 compute-0 podman[116867]: 2025-11-29 05:14:11.759703122 +0000 UTC m=+0.216191705 container attach c4c3c25dade75d74dffdea4ad101839bc47cda838b82830a16bf5e5c3240dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_visvesvaraya, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:14:12 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 29 05:14:12 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 29 05:14:12 compute-0 ceph-mon[75176]: pgmap v276: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]: {
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "osd_id": 0,
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "type": "bluestore"
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:     },
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "osd_id": 1,
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "type": "bluestore"
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:     },
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "osd_id": 2,
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:         "type": "bluestore"
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]:     }
Nov 29 05:14:12 compute-0 sleepy_visvesvaraya[116884]: }
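
Note: the container lifecycle around this block (init, start, attach at 05:14:11; died, remove at 05:14:12) is the signature of a one-shot podman run, here launched by cephadm against the Ceph image, and the JSON it printed appears consistent with ceph-volume raw list --format json output keyed by OSD UUID. A minimal parsing sketch, assuming the block were saved to a file first (osd_inventory.json is a hypothetical name, not taken from the log):

    # Sketch only: parse the OSD inventory JSON printed above.
    # Assumption: the JSON was captured to "osd_inventory.json" (hypothetical path).
    import json

    with open("osd_inventory.json") as f:
        inventory = json.load(f)

    # The mapping is keyed by osd_uuid; each entry names the backing LV and OSD id.
    for osd_uuid, osd in sorted(inventory.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{osd['osd_id']}: type={osd['type']} "
              f"device={osd['device']} fsid={osd['ceph_fsid']}")

All three entries share the same ceph_fsid, which matches a single cluster spread over the three LVM-backed bluestore OSDs seen elsewhere in this log.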
Nov 29 05:14:12 compute-0 systemd[1]: libpod-c4c3c25dade75d74dffdea4ad101839bc47cda838b82830a16bf5e5c3240dbda.scope: Deactivated successfully.
Nov 29 05:14:12 compute-0 systemd[1]: libpod-c4c3c25dade75d74dffdea4ad101839bc47cda838b82830a16bf5e5c3240dbda.scope: Consumed 1.142s CPU time.
Nov 29 05:14:12 compute-0 podman[116867]: 2025-11-29 05:14:12.887852563 +0000 UTC m=+1.344341156 container died c4c3c25dade75d74dffdea4ad101839bc47cda838b82830a16bf5e5c3240dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_visvesvaraya, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-53fba0ba342eadf747c92e27a2ff93426538248c946ca7d582718c7d67978477-merged.mount: Deactivated successfully.
Nov 29 05:14:12 compute-0 podman[116867]: 2025-11-29 05:14:12.961230176 +0000 UTC m=+1.417718759 container remove c4c3c25dade75d74dffdea4ad101839bc47cda838b82830a16bf5e5c3240dbda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_visvesvaraya, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:14:12 compute-0 systemd[1]: libpod-conmon-c4c3c25dade75d74dffdea4ad101839bc47cda838b82830a16bf5e5c3240dbda.scope: Deactivated successfully.
Nov 29 05:14:13 compute-0 sudo[116734]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:14:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:14:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:14:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:14:13 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 62a8967c-7c5c-488d-b4df-cebb0de1de5e does not exist
Nov 29 05:14:13 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 2649ec7a-df15-4197-a614-e9afa012f8cf does not exist
Nov 29 05:14:13 compute-0 sudo[116929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:14:13 compute-0 sudo[116929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:13 compute-0 sudo[116929]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:13 compute-0 sudo[116954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:14:13 compute-0 sudo[116954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:14:13 compute-0 sudo[116954]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:13 compute-0 ceph-mon[75176]: 11.b scrub starts
Nov 29 05:14:13 compute-0 ceph-mon[75176]: 11.b scrub ok
Nov 29 05:14:13 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:14:13 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:14:13 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 29 05:14:13 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 29 05:14:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:14 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 29 05:14:14 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 29 05:14:14 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 29 05:14:14 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 29 05:14:14 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 29 05:14:14 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 29 05:14:15 compute-0 ceph-mon[75176]: pgmap v277: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:15 compute-0 ceph-mon[75176]: 5.c scrub starts
Nov 29 05:14:15 compute-0 ceph-mon[75176]: 5.c scrub ok
Nov 29 05:14:15 compute-0 ceph-mon[75176]: 11.2 scrub starts
Nov 29 05:14:15 compute-0 ceph-mon[75176]: 11.2 scrub ok
Nov 29 05:14:15 compute-0 ceph-mon[75176]: 3.17 scrub starts
Nov 29 05:14:15 compute-0 ceph-mon[75176]: 3.17 scrub ok
Nov 29 05:14:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:16 compute-0 ceph-mon[75176]: 2.7 scrub starts
Nov 29 05:14:16 compute-0 ceph-mon[75176]: 2.7 scrub ok
Nov 29 05:14:16 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 29 05:14:16 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 29 05:14:17 compute-0 ceph-mon[75176]: pgmap v278: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:17 compute-0 ceph-mon[75176]: 8.2 scrub starts
Nov 29 05:14:17 compute-0 ceph-mon[75176]: 8.2 scrub ok
Nov 29 05:14:17 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 29 05:14:17 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 29 05:14:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:17 compute-0 sshd-session[116979]: Accepted publickey for zuul from 192.168.122.30 port 33768 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:14:17 compute-0 systemd-logind[793]: New session 36 of user zuul.
Nov 29 05:14:17 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 29 05:14:17 compute-0 sshd-session[116979]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:14:18 compute-0 ceph-mon[75176]: 7.8 scrub starts
Nov 29 05:14:18 compute-0 ceph-mon[75176]: 7.8 scrub ok
Nov 29 05:14:18 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 29 05:14:18 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 29 05:14:19 compute-0 python3.9[117132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:14:19 compute-0 ceph-mon[75176]: pgmap v279: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:19 compute-0 ceph-mon[75176]: 4.12 scrub starts
Nov 29 05:14:19 compute-0 ceph-mon[75176]: 4.12 scrub ok
Nov 29 05:14:19 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 29 05:14:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:19 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 29 05:14:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:19 compute-0 python3.9[117286]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:14:20 compute-0 ceph-mon[75176]: 3.5 scrub starts
Nov 29 05:14:20 compute-0 ceph-mon[75176]: 3.5 scrub ok
Nov 29 05:14:20 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 29 05:14:20 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 29 05:14:21 compute-0 ceph-mon[75176]: pgmap v280: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:21 compute-0 ceph-mon[75176]: 10.f scrub starts
Nov 29 05:14:21 compute-0 ceph-mon[75176]: 10.f scrub ok
Nov 29 05:14:21 compute-0 python3.9[117479]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:14:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:21 compute-0 sshd-session[116982]: Connection closed by 192.168.122.30 port 33768
Nov 29 05:14:21 compute-0 sshd-session[116979]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:14:21 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 05:14:21 compute-0 systemd[1]: session-36.scope: Consumed 2.540s CPU time.
Nov 29 05:14:21 compute-0 systemd-logind[793]: Session 36 logged out. Waiting for processes to exit.
Nov 29 05:14:21 compute-0 systemd-logind[793]: Removed session 36.
Nov 29 05:14:22 compute-0 ceph-mon[75176]: pgmap v281: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:23 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Nov 29 05:14:23 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Nov 29 05:14:23 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 29 05:14:23 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 29 05:14:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:24 compute-0 ceph-mon[75176]: 8.1f deep-scrub starts
Nov 29 05:14:24 compute-0 ceph-mon[75176]: 8.1f deep-scrub ok
Nov 29 05:14:24 compute-0 ceph-mon[75176]: 3.e scrub starts
Nov 29 05:14:24 compute-0 ceph-mon[75176]: 3.e scrub ok
Nov 29 05:14:24 compute-0 ceph-mon[75176]: pgmap v282: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:24 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 29 05:14:24 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 29 05:14:25 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 29 05:14:25 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 29 05:14:25 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 29 05:14:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:25 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 29 05:14:25 compute-0 ceph-mon[75176]: 4.14 scrub starts
Nov 29 05:14:25 compute-0 ceph-mon[75176]: 4.14 scrub ok
Nov 29 05:14:26 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 29 05:14:26 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 29 05:14:26 compute-0 ceph-mon[75176]: 7.13 scrub starts
Nov 29 05:14:26 compute-0 ceph-mon[75176]: 7.13 scrub ok
Nov 29 05:14:26 compute-0 ceph-mon[75176]: 7.a scrub starts
Nov 29 05:14:26 compute-0 ceph-mon[75176]: pgmap v283: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:26 compute-0 ceph-mon[75176]: 7.a scrub ok
Nov 29 05:14:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:27 compute-0 ceph-mon[75176]: 11.3 scrub starts
Nov 29 05:14:27 compute-0 ceph-mon[75176]: 11.3 scrub ok
Nov 29 05:14:27 compute-0 sshd-session[117505]: Accepted publickey for zuul from 192.168.122.30 port 59024 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:14:27 compute-0 systemd-logind[793]: New session 37 of user zuul.
Nov 29 05:14:27 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 29 05:14:27 compute-0 sshd-session[117505]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:14:28 compute-0 ceph-mon[75176]: pgmap v284: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:28 compute-0 python3.9[117658]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:14:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:29 compute-0 python3.9[117812]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:14:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 29 05:14:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 29 05:14:30 compute-0 sudo[117966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhzehfgauarzaplpmwajalazuajhcsup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393270.0439808-40-190246989861828/AnsiballZ_setup.py'
Nov 29 05:14:30 compute-0 sudo[117966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:30 compute-0 ceph-mon[75176]: pgmap v285: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:30 compute-0 python3.9[117968]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:14:31 compute-0 sudo[117966]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:31 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 29 05:14:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:31 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 29 05:14:31 compute-0 ceph-mon[75176]: 8.4 scrub starts
Nov 29 05:14:31 compute-0 ceph-mon[75176]: 8.4 scrub ok
Nov 29 05:14:31 compute-0 sudo[118050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihtblehmiasssztedyhbmdhaiovvvcvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393270.0439808-40-190246989861828/AnsiballZ_dnf.py'
Nov 29 05:14:31 compute-0 sudo[118050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:31 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 29 05:14:31 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 29 05:14:31 compute-0 python3.9[118052]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:14:32 compute-0 ceph-mon[75176]: 8.1b scrub starts
Nov 29 05:14:32 compute-0 ceph-mon[75176]: pgmap v286: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:32 compute-0 ceph-mon[75176]: 8.1b scrub ok
Nov 29 05:14:32 compute-0 ceph-mon[75176]: 4.9 scrub starts
Nov 29 05:14:32 compute-0 sudo[118050]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:33 compute-0 sudo[118203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzixfoqsoglkntxmdkxtyhuvaywsajhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393273.157575-52-100258491401379/AnsiballZ_setup.py'
Nov 29 05:14:33 compute-0 sudo[118203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:33 compute-0 ceph-mon[75176]: 4.9 scrub ok
Nov 29 05:14:33 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 29 05:14:33 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 29 05:14:33 compute-0 python3.9[118205]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:14:34 compute-0 sudo[118203]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:34 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 29 05:14:34 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 29 05:14:34 compute-0 ceph-mon[75176]: pgmap v287: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:34 compute-0 ceph-mon[75176]: 5.f scrub starts
Nov 29 05:14:34 compute-0 ceph-mon[75176]: 5.f scrub ok
Nov 29 05:14:34 compute-0 sudo[118398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddkonilwrpvdgnbezfqmsylhdtyhopmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393274.3540149-63-159198614275095/AnsiballZ_file.py'
Nov 29 05:14:34 compute-0 sudo[118398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:34 compute-0 python3.9[118400]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:14:35 compute-0 sudo[118398]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:35 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Nov 29 05:14:35 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Nov 29 05:14:35 compute-0 ceph-mon[75176]: 3.11 scrub starts
Nov 29 05:14:35 compute-0 ceph-mon[75176]: 3.11 scrub ok
Nov 29 05:14:35 compute-0 sudo[118550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydlwlnxaesohkfczcxgelxotonmkhxua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393275.1974185-71-31197176304518/AnsiballZ_command.py'
Nov 29 05:14:35 compute-0 sudo[118550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:35 compute-0 python3.9[118552]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:14:35 compute-0 sudo[118550]: pam_unix(sudo:session): session closed for user root
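
Note: the command task above shells out to podman network inspect podman, which prints a JSON array describing the named network. A sketch of the same check from Python, assuming podman is on PATH (the exact field set varies by podman version; "name" is stable):

    # Sketch: reproduce the "podman network inspect podman" check done by the task above.
    import json
    import subprocess

    result = subprocess.run(
        ["podman", "network", "inspect", "podman"],
        capture_output=True, text=True, check=True,
    )
    networks = json.loads(result.stdout)  # inspect returns a JSON array
    for net in networks:
        # Field names beyond "name" differ across podman versions; hedge accordingly.
        print(net.get("name"), net.get("driver"))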
Nov 29 05:14:36 compute-0 sudo[118715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akymzthosxgrunohcpjvxnvwvwvfiebt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393276.0419936-79-167552061891798/AnsiballZ_stat.py'
Nov 29 05:14:36 compute-0 sudo[118715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:36 compute-0 ceph-mon[75176]: pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:36 compute-0 ceph-mon[75176]: 11.18 deep-scrub starts
Nov 29 05:14:36 compute-0 ceph-mon[75176]: 11.18 deep-scrub ok
Nov 29 05:14:36 compute-0 python3.9[118717]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:14:36 compute-0 sudo[118715]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:36 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 29 05:14:36 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 29 05:14:37 compute-0 sudo[118793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eckmamonvrygaqdjfwblbpfydjpzycni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393276.0419936-79-167552061891798/AnsiballZ_file.py'
Nov 29 05:14:37 compute-0 sudo[118793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:37 compute-0 python3.9[118795]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:14:37 compute-0 sudo[118793]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:37 compute-0 ceph-mon[75176]: 9.1b scrub starts
Nov 29 05:14:37 compute-0 ceph-mon[75176]: 9.1b scrub ok
Nov 29 05:14:37 compute-0 sudo[118945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqwhalzjmzvtpopkgtkvsvmftsrfvbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393277.4805965-91-244871406174628/AnsiballZ_stat.py'
Nov 29 05:14:37 compute-0 sudo[118945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:38 compute-0 python3.9[118947]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:14:38 compute-0 sudo[118945]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:38 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 29 05:14:38 compute-0 sudo[119023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uojznacumchchdltigzubmyepaxxlkkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393277.4805965-91-244871406174628/AnsiballZ_file.py'
Nov 29 05:14:38 compute-0 sudo[119023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:38 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 29 05:14:38 compute-0 ceph-mon[75176]: pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:38 compute-0 python3.9[119025]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:14:38 compute-0 sudo[119023]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:39 compute-0 sudo[119175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtqvcjxghpbvtqiaqunudraotxkreunj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393278.8872108-104-269548513296859/AnsiballZ_ini_file.py'
Nov 29 05:14:39 compute-0 sudo[119175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:39 compute-0 ceph-mon[75176]: 7.15 deep-scrub starts
Nov 29 05:14:39 compute-0 ceph-mon[75176]: 7.15 deep-scrub ok
Nov 29 05:14:39 compute-0 python3.9[119177]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:14:39 compute-0 sudo[119175]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:40 compute-0 sudo[119327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyutyhjcgxmaqodmahmiuhuwyidahrhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393279.7659543-104-22385052076628/AnsiballZ_ini_file.py'
Nov 29 05:14:40 compute-0 sudo[119327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:40 compute-0 python3.9[119329]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:14:40 compute-0 sudo[119327]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:40 compute-0 ceph-mon[75176]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:40 compute-0 sudo[119479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsrkzgyqryrexvlupriioaglxdfsvxwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393280.5387423-104-33453152726911/AnsiballZ_ini_file.py'
Nov 29 05:14:40 compute-0 sudo[119479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:40 compute-0 python3.9[119481]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:14:41 compute-0 sudo[119479]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:14:41
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.log', '.rgw.root', 'vms']
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:14:41 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:14:41 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 29 05:14:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:41 compute-0 sudo[119631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqpvocbqrejkfjgqitqktitwpddsfmtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393281.1975152-104-59098151000047/AnsiballZ_ini_file.py'
Nov 29 05:14:41 compute-0 sudo[119631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:41 compute-0 python3.9[119633]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:14:41 compute-0 sudo[119631]: pam_unix(sudo:session): session closed for user root
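
Note: taken together, the four community.general.ini_file tasks above set pids_limit under [containers], events_logger and runtime under [engine], and network_backend under [network]. Assuming /etc/containers/containers.conf carried no conflicting entries beforehand (each task passes create=True), the file would plausibly end up reading:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"

The quotes around journald, crun, and netavark come from the values as passed in the invocations; containers.conf is TOML, where string values must be quoted while the integer pids_limit is not.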
Nov 29 05:14:41 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 29 05:14:41 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 29 05:14:42 compute-0 sudo[119783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxaiwmzycxknpefwauplismrdrdygvhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393282.1867764-135-87113841414266/AnsiballZ_dnf.py'
Nov 29 05:14:42 compute-0 sudo[119783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:42 compute-0 ceph-mon[75176]: 11.1a scrub starts
Nov 29 05:14:42 compute-0 ceph-mon[75176]: 11.1a scrub ok
Nov 29 05:14:42 compute-0 ceph-mon[75176]: pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:42 compute-0 ceph-mon[75176]: 9.1 scrub starts
Nov 29 05:14:42 compute-0 ceph-mon[75176]: 9.1 scrub ok
Nov 29 05:14:42 compute-0 python3.9[119785]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:14:42 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 29 05:14:42 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 29 05:14:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:43 compute-0 ceph-mon[75176]: 9.11 scrub starts
Nov 29 05:14:43 compute-0 ceph-mon[75176]: 9.11 scrub ok
Nov 29 05:14:44 compute-0 sudo[119783]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:44 compute-0 ceph-mon[75176]: pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:44 compute-0 sudo[119936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plqvadfwwhtviztjpjqkjkgzcgufgtwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393284.560757-146-85047455185684/AnsiballZ_setup.py'
Nov 29 05:14:44 compute-0 sudo[119936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:45 compute-0 python3.9[119938]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:14:45 compute-0 sudo[119936]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:45 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 29 05:14:45 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 29 05:14:45 compute-0 sudo[120090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgsykzdswndkkadrbwqwutgjglhznevf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393285.5405865-154-67397258600427/AnsiballZ_stat.py'
Nov 29 05:14:45 compute-0 sudo[120090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:46 compute-0 python3.9[120092]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:14:46 compute-0 sudo[120090]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:46 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 29 05:14:46 compute-0 sudo[120242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmsayenzxgrefeexzoccoxzmknaxacai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393286.2263632-163-176274490995090/AnsiballZ_stat.py'
Nov 29 05:14:46 compute-0 sudo[120242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:46 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 29 05:14:46 compute-0 ceph-mon[75176]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:46 compute-0 ceph-mon[75176]: 10.b scrub starts
Nov 29 05:14:46 compute-0 ceph-mon[75176]: 10.b scrub ok
Nov 29 05:14:46 compute-0 python3.9[120244]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:14:46 compute-0 sudo[120242]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:47 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 29 05:14:47 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 29 05:14:47 compute-0 sudo[120394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avfbnzdotplcnsizyybqjdqpnmsigaua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393287.1675563-173-146119026393919/AnsiballZ_command.py'
Nov 29 05:14:47 compute-0 sudo[120394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:47 compute-0 ceph-mon[75176]: 4.10 scrub starts
Nov 29 05:14:47 compute-0 ceph-mon[75176]: 4.10 scrub ok
Nov 29 05:14:47 compute-0 python3.9[120396]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:14:47 compute-0 sudo[120394]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:48 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 29 05:14:48 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 29 05:14:48 compute-0 sudo[120547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kouhtlirnbcdoucrhrngoykczwzxsuza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393288.063494-183-50310127284861/AnsiballZ_service_facts.py'
Nov 29 05:14:48 compute-0 sudo[120547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:48 compute-0 ceph-mon[75176]: pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:48 compute-0 ceph-mon[75176]: 11.1c scrub starts
Nov 29 05:14:48 compute-0 ceph-mon[75176]: 11.1c scrub ok
Nov 29 05:14:48 compute-0 python3.9[120549]: ansible-service_facts Invoked
Nov 29 05:14:48 compute-0 network[120566]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 05:14:48 compute-0 network[120567]: 'network-scripts' will be removed from distribution in near future.
Nov 29 05:14:48 compute-0 network[120568]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 05:14:48 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 29 05:14:48 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 29 05:14:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:49 compute-0 ceph-mon[75176]: 2.5 scrub starts
Nov 29 05:14:49 compute-0 ceph-mon[75176]: 2.5 scrub ok
Nov 29 05:14:49 compute-0 ceph-mon[75176]: 9.d scrub starts
Nov 29 05:14:49 compute-0 ceph-mon[75176]: 9.d scrub ok
Nov 29 05:14:49 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.b deep-scrub starts
Nov 29 05:14:49 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.b deep-scrub ok
Nov 29 05:14:50 compute-0 ceph-mon[75176]: pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:50 compute-0 ceph-mon[75176]: 9.b deep-scrub starts
Nov 29 05:14:50 compute-0 ceph-mon[75176]: 9.b deep-scrub ok
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
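
Note: the autoscaler figures above are internally consistent with pg_target = usage_ratio * bias * (mon_target_pg_per_osd * num_osds), where mon_target_pg_per_osd = 100 and num_osds = 3 are inferred from the arithmetic and from the three ceph-osd PIDs in this log, not stated in it. A sketch that reproduces two of the printed targets:

    # Sketch: reproduce the pg_autoscaler targets logged above.
    # Assumption (inferred, not logged): mon_target_pg_per_osd=100, 3 OSDs,
    # so the cluster-wide PG budget is 100 * 3 = 300.
    PG_BUDGET = 100 * 3

    def pg_target(usage_ratio, bias=1.0):
        return usage_ratio * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06))       # .mgr -> ~0.0021557 (log: quantized to 1)
    print(pg_target(5.087256625643029e-07, 4.0))  # cephfs.cephfs.meta -> ~0.00061047 (quantized to 16)

Pools reported as "using 0.0 of space" get a raw target of 0.0 and simply stay quantized at their current pg_num, which is why every line above ends with no change.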
Nov 29 05:14:51 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 29 05:14:51 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 29 05:14:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:52 compute-0 ceph-mon[75176]: 11.1e scrub starts
Nov 29 05:14:52 compute-0 ceph-mon[75176]: 11.1e scrub ok
Nov 29 05:14:52 compute-0 ceph-mon[75176]: pgmap v296: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.708133) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393293708366, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7200, "num_deletes": 251, "total_data_size": 9311174, "memory_usage": 9549328, "flush_reason": "Manual Compaction"}
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393293769185, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7470107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 132, "largest_seqno": 7329, "table_properties": {"data_size": 7443558, "index_size": 17346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 75287, "raw_average_key_size": 23, "raw_value_size": 7381075, "raw_average_value_size": 2277, "num_data_blocks": 762, "num_entries": 3241, "num_filter_entries": 3241, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392874, "oldest_key_time": 1764392874, "file_creation_time": 1764393293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 61157 microseconds, and 27161 cpu microseconds.
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.769302) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7470107 bytes OK
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.769337) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.770792) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.770816) EVENT_LOG_v1 {"time_micros": 1764393293770809, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.770843) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9279879, prev total WAL file size 9279879, number of live WAL files 2.
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.774508) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7295KB) 13(50KB) 8(1944B)]
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393293774650, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7523841, "oldest_snapshot_seqno": -1}
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3053 keys, 7481018 bytes, temperature: kUnknown
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393293829407, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7481018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7454902, "index_size": 17366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 73264, "raw_average_key_size": 23, "raw_value_size": 7394056, "raw_average_value_size": 2421, "num_data_blocks": 765, "num_entries": 3053, "num_filter_entries": 3053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764393293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.829736) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7481018 bytes
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.831365) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.2 rd, 136.4 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.2, 0.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3342, records dropped: 289 output_compression: NoCompression
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.831397) EVENT_LOG_v1 {"time_micros": 1764393293831382, "job": 4, "event": "compaction_finished", "compaction_time_micros": 54851, "compaction_time_cpu_micros": 19478, "output_level": 6, "num_output_files": 1, "total_output_size": 7481018, "num_input_records": 3342, "num_output_records": 3053, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393293834533, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393293834649, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393293834711, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 05:14:53 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:14:53.774332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:14:53 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 29 05:14:53 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 29 05:14:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:54 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.d deep-scrub starts
Nov 29 05:14:54 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.d deep-scrub ok
Nov 29 05:14:54 compute-0 ceph-mon[75176]: pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:54 compute-0 ceph-mon[75176]: 9.9 scrub starts
Nov 29 05:14:54 compute-0 ceph-mon[75176]: 9.9 scrub ok
Nov 29 05:14:54 compute-0 ceph-mon[75176]: 4.d deep-scrub starts
Nov 29 05:14:54 compute-0 ceph-mon[75176]: 4.d deep-scrub ok
Nov 29 05:14:55 compute-0 sudo[120547]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:55 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 29 05:14:55 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 29 05:14:55 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 29 05:14:55 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 29 05:14:55 compute-0 ceph-mon[75176]: 6.1 scrub starts
Nov 29 05:14:55 compute-0 ceph-mon[75176]: 6.1 scrub ok
Nov 29 05:14:56 compute-0 sudo[120852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pebnczmjwketvthgzinalifghawzlrby ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764393295.9588451-198-229642963065654/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764393295.9588451-198-229642963065654/args'
Nov 29 05:14:56 compute-0 sudo[120852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:56 compute-0 sudo[120852]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:56 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 29 05:14:56 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 29 05:14:56 compute-0 ceph-mon[75176]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:56 compute-0 ceph-mon[75176]: 11.1b scrub starts
Nov 29 05:14:56 compute-0 ceph-mon[75176]: 11.1b scrub ok
Nov 29 05:14:56 compute-0 ceph-mon[75176]: 2.4 scrub starts
Nov 29 05:14:56 compute-0 ceph-mon[75176]: 2.4 scrub ok
Nov 29 05:14:57 compute-0 sudo[121019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juyfyqhnzjlmplecqcjyzpeolbjkmrjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393296.8885176-209-216469655806750/AnsiballZ_dnf.py'
Nov 29 05:14:57 compute-0 sudo[121019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:57 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 29 05:14:57 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 29 05:14:57 compute-0 python3.9[121021]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:14:57 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 29 05:14:57 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 29 05:14:57 compute-0 ceph-mon[75176]: 2.6 scrub starts
Nov 29 05:14:57 compute-0 ceph-mon[75176]: 2.6 scrub ok
Nov 29 05:14:58 compute-0 sudo[121019]: pam_unix(sudo:session): session closed for user root
Nov 29 05:14:58 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 29 05:14:58 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 29 05:14:58 compute-0 ceph-mon[75176]: pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:58 compute-0 ceph-mon[75176]: 7.11 scrub starts
Nov 29 05:14:58 compute-0 ceph-mon[75176]: 7.11 scrub ok
Nov 29 05:14:58 compute-0 ceph-mon[75176]: 4.f scrub starts
Nov 29 05:14:58 compute-0 ceph-mon[75176]: 4.f scrub ok
Nov 29 05:14:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:14:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:14:59 compute-0 sudo[121172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azfbdrbxtqarqwerhyzjsoqjqpdxfxyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393298.966531-222-134193105728460/AnsiballZ_package_facts.py'
Nov 29 05:14:59 compute-0 sudo[121172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:14:59 compute-0 python3.9[121174]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 05:15:00 compute-0 sudo[121172]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:00 compute-0 ceph-mon[75176]: pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:00 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 29 05:15:00 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 29 05:15:00 compute-0 sudo[121324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dselebejhrqvogqpoqspbpnmjtalosys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393300.5932777-232-135188811471206/AnsiballZ_stat.py'
Nov 29 05:15:00 compute-0 sudo[121324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:01 compute-0 python3.9[121326]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:01 compute-0 sudo[121324]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:01 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 29 05:15:01 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 29 05:15:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:01 compute-0 sudo[121402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sloxrlsavrcxxpxeyhbhlgwbbrwtngvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393300.5932777-232-135188811471206/AnsiballZ_file.py'
Nov 29 05:15:01 compute-0 sudo[121402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:01 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 29 05:15:01 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 29 05:15:01 compute-0 python3.9[121404]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:01 compute-0 sudo[121402]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:01 compute-0 ceph-mon[75176]: 9.5 scrub starts
Nov 29 05:15:01 compute-0 ceph-mon[75176]: 9.5 scrub ok
Nov 29 05:15:01 compute-0 ceph-mon[75176]: 10.2 scrub starts
Nov 29 05:15:01 compute-0 ceph-mon[75176]: 10.2 scrub ok
Nov 29 05:15:01 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 29 05:15:02 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 29 05:15:02 compute-0 sudo[121554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yynmznqxyxpogtdytktuixeexgzwahbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393301.875534-244-182495730588164/AnsiballZ_stat.py'
Nov 29 05:15:02 compute-0 sudo[121554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:02 compute-0 python3.9[121556]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:02 compute-0 sudo[121554]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:02 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 29 05:15:02 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 29 05:15:02 compute-0 sudo[121632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puyexcawuiuttftpppifmsymqovcotcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393301.875534-244-182495730588164/AnsiballZ_file.py'
Nov 29 05:15:02 compute-0 sudo[121632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:02 compute-0 ceph-mon[75176]: 3.16 scrub starts
Nov 29 05:15:02 compute-0 ceph-mon[75176]: 3.16 scrub ok
Nov 29 05:15:02 compute-0 ceph-mon[75176]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:02 compute-0 ceph-mon[75176]: 9.3 scrub starts
Nov 29 05:15:02 compute-0 ceph-mon[75176]: 9.3 scrub ok
Nov 29 05:15:02 compute-0 ceph-mon[75176]: 10.14 scrub starts
Nov 29 05:15:02 compute-0 ceph-mon[75176]: 10.14 scrub ok
Nov 29 05:15:02 compute-0 python3.9[121634]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:02 compute-0 sudo[121632]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:03 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 29 05:15:03 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 29 05:15:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:03 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 29 05:15:03 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 29 05:15:03 compute-0 ceph-mon[75176]: 9.1d scrub starts
Nov 29 05:15:03 compute-0 ceph-mon[75176]: 2.9 scrub starts
Nov 29 05:15:03 compute-0 ceph-mon[75176]: 2.9 scrub ok
Nov 29 05:15:03 compute-0 sudo[121784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljgsuykpfybrousdtlwwphjsbzykjava ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393303.4084811-262-27382346423835/AnsiballZ_lineinfile.py'
Nov 29 05:15:03 compute-0 sudo[121784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:04 compute-0 python3.9[121786]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:04 compute-0 sudo[121784]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:04 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 29 05:15:04 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 29 05:15:04 compute-0 ceph-mon[75176]: 9.1d scrub ok
Nov 29 05:15:04 compute-0 ceph-mon[75176]: pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:04 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 29 05:15:05 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 29 05:15:05 compute-0 sudo[121936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njflftuneatkaecqczdimpicpqtftpkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393304.806445-277-96031943176805/AnsiballZ_setup.py'
Nov 29 05:15:05 compute-0 sudo[121936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:05 compute-0 python3.9[121938]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:15:05 compute-0 sudo[121936]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:05 compute-0 ceph-mon[75176]: 8.1c scrub starts
Nov 29 05:15:05 compute-0 ceph-mon[75176]: 8.1c scrub ok
Nov 29 05:15:05 compute-0 ceph-mon[75176]: 6.3 scrub starts
Nov 29 05:15:05 compute-0 ceph-mon[75176]: 6.3 scrub ok
Nov 29 05:15:06 compute-0 sudo[122020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvongdkuztoicmqducwnkzyffeepquci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393304.806445-277-96031943176805/AnsiballZ_systemd.py'
Nov 29 05:15:06 compute-0 sudo[122020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:06 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 29 05:15:06 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 29 05:15:06 compute-0 python3.9[122022]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:15:06 compute-0 sudo[122020]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:06 compute-0 ceph-mon[75176]: pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:06 compute-0 ceph-mon[75176]: 2.1b scrub starts
Nov 29 05:15:06 compute-0 ceph-mon[75176]: 2.1b scrub ok
Nov 29 05:15:06 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 29 05:15:07 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 29 05:15:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:07 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 29 05:15:07 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 29 05:15:07 compute-0 sshd-session[117508]: Connection closed by 192.168.122.30 port 59024
Nov 29 05:15:07 compute-0 sshd-session[117505]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:15:07 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 05:15:07 compute-0 systemd[1]: session-37.scope: Consumed 25.367s CPU time.
Nov 29 05:15:07 compute-0 systemd-logind[793]: Session 37 logged out. Waiting for processes to exit.
Nov 29 05:15:07 compute-0 systemd-logind[793]: Removed session 37.
Nov 29 05:15:07 compute-0 ceph-mon[75176]: 6.7 scrub starts
Nov 29 05:15:07 compute-0 ceph-mon[75176]: 6.7 scrub ok
Nov 29 05:15:07 compute-0 ceph-mon[75176]: 5.18 scrub starts
Nov 29 05:15:07 compute-0 ceph-mon[75176]: 5.18 scrub ok
Nov 29 05:15:08 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 29 05:15:08 compute-0 sshd-session[122049]: Invalid user gns3 from 101.47.141.125 port 34620
Nov 29 05:15:08 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 29 05:15:08 compute-0 ceph-mon[75176]: pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:08 compute-0 ceph-mon[75176]: 10.13 scrub starts
Nov 29 05:15:08 compute-0 ceph-mon[75176]: 10.13 scrub ok
Nov 29 05:15:09 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Nov 29 05:15:09 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Nov 29 05:15:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:09 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 29 05:15:09 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 29 05:15:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:09 compute-0 sshd-session[122049]: Received disconnect from 101.47.141.125 port 34620:11: Bye Bye [preauth]
Nov 29 05:15:09 compute-0 sshd-session[122049]: Disconnected from invalid user gns3 101.47.141.125 port 34620 [preauth]
Nov 29 05:15:10 compute-0 ceph-mon[75176]: 6.5 deep-scrub starts
Nov 29 05:15:10 compute-0 ceph-mon[75176]: 6.5 deep-scrub ok
Nov 29 05:15:10 compute-0 ceph-mon[75176]: 11.1f scrub starts
Nov 29 05:15:10 compute-0 ceph-mon[75176]: 11.1f scrub ok
Nov 29 05:15:10 compute-0 ceph-mon[75176]: pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:10 compute-0 sshd-session[122051]: Received disconnect from 114.66.38.28 port 56544:11:  [preauth]
Nov 29 05:15:10 compute-0 sshd-session[122051]: Disconnected from authenticating user root 114.66.38.28 port 56544 [preauth]
Nov 29 05:15:11 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 29 05:15:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:15:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:15:11 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 29 05:15:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:15:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:15:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:15:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:15:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:11 compute-0 ceph-mon[75176]: 2.a scrub starts
Nov 29 05:15:11 compute-0 ceph-mon[75176]: 2.a scrub ok
Nov 29 05:15:12 compute-0 ceph-mon[75176]: pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:12 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 29 05:15:12 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 29 05:15:13 compute-0 sudo[122053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:15:13 compute-0 sudo[122053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:13 compute-0 sudo[122053]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:13 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Nov 29 05:15:13 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Nov 29 05:15:13 compute-0 sudo[122078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:15:13 compute-0 sudo[122078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:13 compute-0 sudo[122078]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:13 compute-0 sudo[122103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:15:13 compute-0 sudo[122103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:13 compute-0 sudo[122103]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:13 compute-0 sudo[122130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:15:13 compute-0 sudo[122130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:13 compute-0 sshd-session[122126]: Accepted publickey for zuul from 192.168.122.30 port 49666 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:15:13 compute-0 systemd-logind[793]: New session 38 of user zuul.
Nov 29 05:15:13 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 29 05:15:13 compute-0 sshd-session[122126]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:15:13 compute-0 ceph-mon[75176]: 6.9 scrub starts
Nov 29 05:15:13 compute-0 ceph-mon[75176]: 6.9 scrub ok
Nov 29 05:15:13 compute-0 ceph-mon[75176]: 5.19 deep-scrub starts
Nov 29 05:15:13 compute-0 ceph-mon[75176]: 5.19 deep-scrub ok
Nov 29 05:15:14 compute-0 sudo[122130]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:15:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:15:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:15:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:15:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:15:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:15:14 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev b54a0450-c59a-4a93-a169-e21c5b2bbfe1 does not exist
Nov 29 05:15:14 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev aa7ec79e-3622-401e-a72f-1d5c6b7acece does not exist
Nov 29 05:15:14 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 9035e47d-79f8-4b41-886f-5f0c45023af8 does not exist
Nov 29 05:15:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:15:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:15:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:15:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:15:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:15:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:15:14 compute-0 sudo[122287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:15:14 compute-0 sudo[122287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:14 compute-0 sudo[122287]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:14 compute-0 sudo[122330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:15:14 compute-0 sudo[122330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:14 compute-0 sudo[122330]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:14 compute-0 sudo[122393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtcuoqmgqgkrvxtgprticijqomcxvqtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393313.7236083-22-132084892196735/AnsiballZ_file.py'
Nov 29 05:15:14 compute-0 sudo[122393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:14 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 29 05:15:14 compute-0 sudo[122385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:15:14 compute-0 sudo[122385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:14 compute-0 sudo[122385]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:14 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 29 05:15:14 compute-0 sudo[122415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:15:14 compute-0 sudo[122415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:14 compute-0 python3.9[122410]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:14 compute-0 sudo[122393]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:14 compute-0 ceph-mon[75176]: pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:15:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:15:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:15:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:15:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:15:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:15:14 compute-0 podman[122548]: 2025-11-29 05:15:14.931617284 +0000 UTC m=+0.060832569 container create fa8d2ef276f7d0f4febfb1b65dd2d93b51aa9e30f6573b5fcb3ee4505927e147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_neumann, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:15:14 compute-0 systemd[1]: Started libpod-conmon-fa8d2ef276f7d0f4febfb1b65dd2d93b51aa9e30f6573b5fcb3ee4505927e147.scope.
Nov 29 05:15:15 compute-0 podman[122548]: 2025-11-29 05:15:14.909343976 +0000 UTC m=+0.038559321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:15:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:15:15 compute-0 podman[122548]: 2025-11-29 05:15:15.035183383 +0000 UTC m=+0.164398638 container init fa8d2ef276f7d0f4febfb1b65dd2d93b51aa9e30f6573b5fcb3ee4505927e147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_neumann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:15:15 compute-0 podman[122548]: 2025-11-29 05:15:15.043278828 +0000 UTC m=+0.172494083 container start fa8d2ef276f7d0f4febfb1b65dd2d93b51aa9e30f6573b5fcb3ee4505927e147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:15:15 compute-0 podman[122548]: 2025-11-29 05:15:15.046470295 +0000 UTC m=+0.175685550 container attach fa8d2ef276f7d0f4febfb1b65dd2d93b51aa9e30f6573b5fcb3ee4505927e147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 05:15:15 compute-0 gallant_neumann[122573]: 167 167
Nov 29 05:15:15 compute-0 systemd[1]: libpod-fa8d2ef276f7d0f4febfb1b65dd2d93b51aa9e30f6573b5fcb3ee4505927e147.scope: Deactivated successfully.
Nov 29 05:15:15 compute-0 conmon[122573]: conmon fa8d2ef276f7d0f4febf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa8d2ef276f7d0f4febfb1b65dd2d93b51aa9e30f6573b5fcb3ee4505927e147.scope/container/memory.events
Nov 29 05:15:15 compute-0 podman[122548]: 2025-11-29 05:15:15.050101203 +0000 UTC m=+0.179316458 container died fa8d2ef276f7d0f4febfb1b65dd2d93b51aa9e30f6573b5fcb3ee4505927e147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_neumann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:15:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d516c0717990ad3ca4a2f8e0ccd1d5bd3ed930a5078c20a638e4c644e3fdff4-merged.mount: Deactivated successfully.
Nov 29 05:15:15 compute-0 podman[122548]: 2025-11-29 05:15:15.087579118 +0000 UTC m=+0.216794373 container remove fa8d2ef276f7d0f4febfb1b65dd2d93b51aa9e30f6573b5fcb3ee4505927e147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_neumann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:15:15 compute-0 systemd[1]: libpod-conmon-fa8d2ef276f7d0f4febfb1b65dd2d93b51aa9e30f6573b5fcb3ee4505927e147.scope: Deactivated successfully.
Nov 29 05:15:15 compute-0 podman[122622]: 2025-11-29 05:15:15.250465878 +0000 UTC m=+0.037488816 container create b569796866bda5ed198f7c3c429a63bad14f4ea413a291de84c3d24b8d77ce01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:15:15 compute-0 systemd[1]: Started libpod-conmon-b569796866bda5ed198f7c3c429a63bad14f4ea413a291de84c3d24b8d77ce01.scope.
Nov 29 05:15:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb670db76e35fe1dd71b7c5f662f5788b7b33a0813a77b947957a3b7b399d55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:15 compute-0 podman[122622]: 2025-11-29 05:15:15.234101633 +0000 UTC m=+0.021124591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb670db76e35fe1dd71b7c5f662f5788b7b33a0813a77b947957a3b7b399d55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb670db76e35fe1dd71b7c5f662f5788b7b33a0813a77b947957a3b7b399d55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb670db76e35fe1dd71b7c5f662f5788b7b33a0813a77b947957a3b7b399d55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:15 compute-0 sudo[122691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcrtxdtmktchwrgqqlrgsspgzhiyqnia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393314.7966404-34-26016898255081/AnsiballZ_stat.py'
Nov 29 05:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb670db76e35fe1dd71b7c5f662f5788b7b33a0813a77b947957a3b7b399d55/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:15 compute-0 sudo[122691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:15 compute-0 podman[122622]: 2025-11-29 05:15:15.353289508 +0000 UTC m=+0.140312446 container init b569796866bda5ed198f7c3c429a63bad14f4ea413a291de84c3d24b8d77ce01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:15:15 compute-0 podman[122622]: 2025-11-29 05:15:15.364436537 +0000 UTC m=+0.151459475 container start b569796866bda5ed198f7c3c429a63bad14f4ea413a291de84c3d24b8d77ce01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:15:15 compute-0 podman[122622]: 2025-11-29 05:15:15.379105972 +0000 UTC m=+0.166128960 container attach b569796866bda5ed198f7c3c429a63bad14f4ea413a291de84c3d24b8d77ce01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:15:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:15 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 29 05:15:15 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 29 05:15:15 compute-0 python3.9[122693]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:15 compute-0 sudo[122691]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:15 compute-0 sudo[122771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwkcqwcpjaolrvljdfdjlzvjsusceqch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393314.7966404-34-26016898255081/AnsiballZ_file.py'
Nov 29 05:15:15 compute-0 sudo[122771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:15 compute-0 ceph-mon[75176]: 9.e scrub starts
Nov 29 05:15:15 compute-0 ceph-mon[75176]: 9.e scrub ok
Nov 29 05:15:15 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 29 05:15:15 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 29 05:15:16 compute-0 python3.9[122773]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:16 compute-0 sudo[122771]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:16 compute-0 sshd-session[122156]: Connection closed by 192.168.122.30 port 49666
Nov 29 05:15:16 compute-0 sshd-session[122126]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:15:16 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 05:15:16 compute-0 systemd[1]: session-38.scope: Consumed 1.779s CPU time.
Nov 29 05:15:16 compute-0 systemd-logind[793]: Session 38 logged out. Waiting for processes to exit.
Nov 29 05:15:16 compute-0 systemd-logind[793]: Removed session 38.
Nov 29 05:15:16 compute-0 hungry_rubin[122678]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:15:16 compute-0 hungry_rubin[122678]: --> relative data size: 1.0
Nov 29 05:15:16 compute-0 hungry_rubin[122678]: --> All data devices are unavailable
Nov 29 05:15:16 compute-0 systemd[1]: libpod-b569796866bda5ed198f7c3c429a63bad14f4ea413a291de84c3d24b8d77ce01.scope: Deactivated successfully.
Nov 29 05:15:16 compute-0 systemd[1]: libpod-b569796866bda5ed198f7c3c429a63bad14f4ea413a291de84c3d24b8d77ce01.scope: Consumed 1.085s CPU time.
Nov 29 05:15:16 compute-0 podman[122622]: 2025-11-29 05:15:16.523915246 +0000 UTC m=+1.310938224 container died b569796866bda5ed198f7c3c429a63bad14f4ea413a291de84c3d24b8d77ce01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rubin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 05:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fb670db76e35fe1dd71b7c5f662f5788b7b33a0813a77b947957a3b7b399d55-merged.mount: Deactivated successfully.
Nov 29 05:15:16 compute-0 podman[122622]: 2025-11-29 05:15:16.588208367 +0000 UTC m=+1.375231306 container remove b569796866bda5ed198f7c3c429a63bad14f4ea413a291de84c3d24b8d77ce01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rubin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:15:16 compute-0 systemd[1]: libpod-conmon-b569796866bda5ed198f7c3c429a63bad14f4ea413a291de84c3d24b8d77ce01.scope: Deactivated successfully.
Nov 29 05:15:16 compute-0 sudo[122415]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:16 compute-0 sudo[122834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:15:16 compute-0 sudo[122834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:16 compute-0 sudo[122834]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:16 compute-0 sudo[122859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:15:16 compute-0 sudo[122859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:16 compute-0 sudo[122859]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:16 compute-0 sudo[122884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:15:16 compute-0 sudo[122884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:16 compute-0 sudo[122884]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:16 compute-0 ceph-mon[75176]: pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:16 compute-0 ceph-mon[75176]: 9.6 scrub starts
Nov 29 05:15:16 compute-0 ceph-mon[75176]: 9.6 scrub ok
Nov 29 05:15:16 compute-0 ceph-mon[75176]: 6.a scrub starts
Nov 29 05:15:16 compute-0 ceph-mon[75176]: 6.a scrub ok
Nov 29 05:15:16 compute-0 sudo[122909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:15:16 compute-0 sudo[122909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:16 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.16 deep-scrub starts
Nov 29 05:15:16 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.16 deep-scrub ok
Nov 29 05:15:17 compute-0 podman[122974]: 2025-11-29 05:15:17.278989587 +0000 UTC m=+0.035790785 container create 10f2d54062652b12563e6aa52de4f0481c0dfa87ca4770595ae8953f0d480ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hertz, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:15:17 compute-0 systemd[1]: Started libpod-conmon-10f2d54062652b12563e6aa52de4f0481c0dfa87ca4770595ae8953f0d480ce6.scope.
Nov 29 05:15:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:15:17 compute-0 podman[122974]: 2025-11-29 05:15:17.353338811 +0000 UTC m=+0.110140079 container init 10f2d54062652b12563e6aa52de4f0481c0dfa87ca4770595ae8953f0d480ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hertz, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:15:17 compute-0 podman[122974]: 2025-11-29 05:15:17.263851562 +0000 UTC m=+0.020652780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:15:17 compute-0 podman[122974]: 2025-11-29 05:15:17.361820406 +0000 UTC m=+0.118621604 container start 10f2d54062652b12563e6aa52de4f0481c0dfa87ca4770595ae8953f0d480ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:15:17 compute-0 podman[122974]: 2025-11-29 05:15:17.365449152 +0000 UTC m=+0.122250430 container attach 10f2d54062652b12563e6aa52de4f0481c0dfa87ca4770595ae8953f0d480ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hertz, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 29 05:15:17 compute-0 tender_hertz[122990]: 167 167
Nov 29 05:15:17 compute-0 systemd[1]: libpod-10f2d54062652b12563e6aa52de4f0481c0dfa87ca4770595ae8953f0d480ce6.scope: Deactivated successfully.
Nov 29 05:15:17 compute-0 podman[122974]: 2025-11-29 05:15:17.367321858 +0000 UTC m=+0.124123046 container died 10f2d54062652b12563e6aa52de4f0481c0dfa87ca4770595ae8953f0d480ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hertz, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:15:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fc6dc710d9dac17bc1071feb94786dfe51a5e33e1c32815bb79002f48364a6d-merged.mount: Deactivated successfully.
Nov 29 05:15:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:17 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 29 05:15:17 compute-0 podman[122974]: 2025-11-29 05:15:17.405846607 +0000 UTC m=+0.162647805 container remove 10f2d54062652b12563e6aa52de4f0481c0dfa87ca4770595ae8953f0d480ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:15:17 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 29 05:15:17 compute-0 systemd[1]: libpod-conmon-10f2d54062652b12563e6aa52de4f0481c0dfa87ca4770595ae8953f0d480ce6.scope: Deactivated successfully.
Nov 29 05:15:17 compute-0 podman[123014]: 2025-11-29 05:15:17.637929128 +0000 UTC m=+0.075696468 container create ef4596382f34c7b2a694949ae9282bce5ceb537761b68c4e5c919c40b6b047bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:15:17 compute-0 systemd[1]: Started libpod-conmon-ef4596382f34c7b2a694949ae9282bce5ceb537761b68c4e5c919c40b6b047bd.scope.
Nov 29 05:15:17 compute-0 podman[123014]: 2025-11-29 05:15:17.60771741 +0000 UTC m=+0.045484800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:15:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e7249f01590526256745f709635c7bcd49e44256b97c9543eaeca61a0dd649/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e7249f01590526256745f709635c7bcd49e44256b97c9543eaeca61a0dd649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e7249f01590526256745f709635c7bcd49e44256b97c9543eaeca61a0dd649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e7249f01590526256745f709635c7bcd49e44256b97c9543eaeca61a0dd649/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:17 compute-0 podman[123014]: 2025-11-29 05:15:17.745939374 +0000 UTC m=+0.183706784 container init ef4596382f34c7b2a694949ae9282bce5ceb537761b68c4e5c919c40b6b047bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:15:17 compute-0 podman[123014]: 2025-11-29 05:15:17.758370004 +0000 UTC m=+0.196137304 container start ef4596382f34c7b2a694949ae9282bce5ceb537761b68c4e5c919c40b6b047bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 05:15:17 compute-0 podman[123014]: 2025-11-29 05:15:17.76313532 +0000 UTC m=+0.200902710 container attach ef4596382f34c7b2a694949ae9282bce5ceb537761b68c4e5c919c40b6b047bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 05:15:17 compute-0 ceph-mon[75176]: 9.16 deep-scrub starts
Nov 29 05:15:17 compute-0 ceph-mon[75176]: 9.16 deep-scrub ok
Nov 29 05:15:17 compute-0 ceph-mon[75176]: 5.1a scrub starts
Nov 29 05:15:17 compute-0 ceph-mon[75176]: 5.1a scrub ok
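[note] The scrub messages above name placement groups by pgid, written as <pool-id>.<pg-seed> with the seed in hexadecimal, so "5.1a" is pool 5, seed 26. A minimal decoding sketch in Python, handy when cross-referencing these lines against pg listings:

    def parse_pgid(pgid: str) -> tuple[int, int]:
        """Split a Ceph pgid like "5.1a" into (pool_id, pg_seed)."""
        pool, seed = pgid.split(".")
        return int(pool), int(seed, 16)

    assert parse_pgid("5.1a") == (5, 26)
    assert parse_pgid("9.16") == (9, 22)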
Nov 29 05:15:18 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 29 05:15:18 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 29 05:15:18 compute-0 brave_feynman[123031]: {
Nov 29 05:15:18 compute-0 brave_feynman[123031]:     "0": [
Nov 29 05:15:18 compute-0 brave_feynman[123031]:         {
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "devices": [
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "/dev/loop3"
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             ],
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_name": "ceph_lv0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_size": "21470642176",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "name": "ceph_lv0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "tags": {
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.cluster_name": "ceph",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.crush_device_class": "",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.encrypted": "0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.osd_id": "0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.type": "block",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.vdo": "0"
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             },
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "type": "block",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "vg_name": "ceph_vg0"
Nov 29 05:15:18 compute-0 brave_feynman[123031]:         }
Nov 29 05:15:18 compute-0 brave_feynman[123031]:     ],
Nov 29 05:15:18 compute-0 brave_feynman[123031]:     "1": [
Nov 29 05:15:18 compute-0 brave_feynman[123031]:         {
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "devices": [
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "/dev/loop4"
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             ],
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_name": "ceph_lv1",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_size": "21470642176",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "name": "ceph_lv1",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "tags": {
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.cluster_name": "ceph",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.crush_device_class": "",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.encrypted": "0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.osd_id": "1",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.type": "block",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.vdo": "0"
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             },
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "type": "block",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "vg_name": "ceph_vg1"
Nov 29 05:15:18 compute-0 brave_feynman[123031]:         }
Nov 29 05:15:18 compute-0 brave_feynman[123031]:     ],
Nov 29 05:15:18 compute-0 brave_feynman[123031]:     "2": [
Nov 29 05:15:18 compute-0 brave_feynman[123031]:         {
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "devices": [
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "/dev/loop5"
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             ],
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_name": "ceph_lv2",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_size": "21470642176",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "name": "ceph_lv2",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "tags": {
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.cluster_name": "ceph",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.crush_device_class": "",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.encrypted": "0",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.osd_id": "2",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.type": "block",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:                 "ceph.vdo": "0"
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             },
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "type": "block",
Nov 29 05:15:18 compute-0 brave_feynman[123031]:             "vg_name": "ceph_vg2"
Nov 29 05:15:18 compute-0 brave_feynman[123031]:         }
Nov 29 05:15:18 compute-0 brave_feynman[123031]:     ]
Nov 29 05:15:18 compute-0 brave_feynman[123031]: }
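[note] The JSON block emitted by the brave_feynman container is `ceph-volume lvm list --format json` output: a map from OSD id to the LVM volume backing it (lv_path, tags, underlying physical devices). A minimal sketch, assuming the same JSON arrives on stdin, that reduces it to an OSD-to-device map; the piped invocation in the comment is illustrative, not taken from this log:

    import json
    import sys

    def osd_device_map(raw: str) -> dict[int, tuple[str, list[str]]]:
        """Map OSD id -> (lv_path, physical devices) from lvm list JSON."""
        result = {}
        for osd_id, entries in json.loads(raw).items():
            for entry in entries:
                result[int(osd_id)] = (entry["lv_path"], entry["devices"])
        return result

    if __name__ == "__main__":
        # Illustrative usage (hypothetical filename):
        #   sudo cephadm ceph-volume -- lvm list --format json | python3 lvmmap.py
        for osd, (lv, devs) in sorted(osd_device_map(sys.stdin.read()).items()):
            print(f"osd.{osd}: {lv} on {','.join(devs)}")

Against the data above this prints osd.0 on /dev/loop3, osd.1 on /dev/loop4, osd.2 on /dev/loop5.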
Nov 29 05:15:18 compute-0 systemd[1]: libpod-ef4596382f34c7b2a694949ae9282bce5ceb537761b68c4e5c919c40b6b047bd.scope: Deactivated successfully.
Nov 29 05:15:18 compute-0 podman[123014]: 2025-11-29 05:15:18.500591854 +0000 UTC m=+0.938359194 container died ef4596382f34c7b2a694949ae9282bce5ceb537761b68c4e5c919c40b6b047bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:15:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-22e7249f01590526256745f709635c7bcd49e44256b97c9543eaeca61a0dd649-merged.mount: Deactivated successfully.
Nov 29 05:15:18 compute-0 podman[123014]: 2025-11-29 05:15:18.565506821 +0000 UTC m=+1.003274151 container remove ef4596382f34c7b2a694949ae9282bce5ceb537761b68c4e5c919c40b6b047bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_feynman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:15:18 compute-0 systemd[1]: libpod-conmon-ef4596382f34c7b2a694949ae9282bce5ceb537761b68c4e5c919c40b6b047bd.scope: Deactivated successfully.
Nov 29 05:15:18 compute-0 sudo[122909]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:18 compute-0 sudo[123054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:15:18 compute-0 sudo[123054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:18 compute-0 sudo[123054]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:18 compute-0 sudo[123079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:15:18 compute-0 sudo[123079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:18 compute-0 sudo[123079]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:18 compute-0 sudo[123104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:15:18 compute-0 sudo[123104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:18 compute-0 sudo[123104]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:18 compute-0 ceph-mon[75176]: pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:18 compute-0 ceph-mon[75176]: 6.6 scrub starts
Nov 29 05:15:18 compute-0 ceph-mon[75176]: 6.6 scrub ok
Nov 29 05:15:18 compute-0 sudo[123129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:15:18 compute-0 sudo[123129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
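[note] The sudo line above shows how cephadm drives ceph-volume: the copied-in cephadm script launches a one-shot podman container from the pinned ceph image, and each create/init/start/attach/died/remove sequence in this log is one such container's entire life. A rough Python equivalent of the one-shot pattern; the flags are assumed from typical cephadm behavior, not taken verbatim from this log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def ceph_volume(*args: str) -> str:
        """Run ceph-volume once in a disposable container, return stdout."""
        cmd = [
            "podman", "run", "--rm", "--privileged",  # removed on exit, as in the events above
            "--entrypoint", "/usr/sbin/ceph-volume",  # assumed binary path inside the image
            "-v", "/dev:/dev",                        # ceph-volume inspects host block devices
            IMAGE, *args,
        ]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # e.g. ceph_volume("raw", "list", "--format", "json")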
Nov 29 05:15:18 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 29 05:15:18 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 29 05:15:19 compute-0 podman[123194]: 2025-11-29 05:15:19.280150956 +0000 UTC m=+0.038943501 container create 4c99c468356a9280957c050f9e4030264056b982fe563f4e0e8e4322abb6589b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:15:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:19 compute-0 systemd[1]: Started libpod-conmon-4c99c468356a9280957c050f9e4030264056b982fe563f4e0e8e4322abb6589b.scope.
Nov 29 05:15:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:15:19 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 29 05:15:19 compute-0 podman[123194]: 2025-11-29 05:15:19.347212684 +0000 UTC m=+0.106005249 container init 4c99c468356a9280957c050f9e4030264056b982fe563f4e0e8e4322abb6589b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 05:15:19 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 29 05:15:19 compute-0 podman[123194]: 2025-11-29 05:15:19.354336466 +0000 UTC m=+0.113129021 container start 4c99c468356a9280957c050f9e4030264056b982fe563f4e0e8e4322abb6589b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:15:19 compute-0 podman[123194]: 2025-11-29 05:15:19.263425502 +0000 UTC m=+0.022218077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:15:19 compute-0 gallant_satoshi[123211]: 167 167
Nov 29 05:15:19 compute-0 podman[123194]: 2025-11-29 05:15:19.358302022 +0000 UTC m=+0.117094577 container attach 4c99c468356a9280957c050f9e4030264056b982fe563f4e0e8e4322abb6589b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:15:19 compute-0 systemd[1]: libpod-4c99c468356a9280957c050f9e4030264056b982fe563f4e0e8e4322abb6589b.scope: Deactivated successfully.
Nov 29 05:15:19 compute-0 podman[123194]: 2025-11-29 05:15:19.35945802 +0000 UTC m=+0.118250575 container died 4c99c468356a9280957c050f9e4030264056b982fe563f4e0e8e4322abb6589b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-25332047d4ec6c1117ccb3a61879570063733b5fd5fb98f8c8720720396e8ed4-merged.mount: Deactivated successfully.
Nov 29 05:15:19 compute-0 podman[123194]: 2025-11-29 05:15:19.395708834 +0000 UTC m=+0.154501389 container remove 4c99c468356a9280957c050f9e4030264056b982fe563f4e0e8e4322abb6589b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:15:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:19 compute-0 systemd[1]: libpod-conmon-4c99c468356a9280957c050f9e4030264056b982fe563f4e0e8e4322abb6589b.scope: Deactivated successfully.
Nov 29 05:15:19 compute-0 podman[123234]: 2025-11-29 05:15:19.561474354 +0000 UTC m=+0.060987053 container create 9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:15:19 compute-0 systemd[1]: Started libpod-conmon-9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81.scope.
Nov 29 05:15:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272d1debef35349fadae6a29df653137203b0862f337f3e178c9138eba8281f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272d1debef35349fadae6a29df653137203b0862f337f3e178c9138eba8281f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272d1debef35349fadae6a29df653137203b0862f337f3e178c9138eba8281f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272d1debef35349fadae6a29df653137203b0862f337f3e178c9138eba8281f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:15:19 compute-0 podman[123234]: 2025-11-29 05:15:19.536722366 +0000 UTC m=+0.036235085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:15:19 compute-0 podman[123234]: 2025-11-29 05:15:19.641110726 +0000 UTC m=+0.140623485 container init 9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:15:19 compute-0 podman[123234]: 2025-11-29 05:15:19.648392531 +0000 UTC m=+0.147905210 container start 9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:15:19 compute-0 podman[123234]: 2025-11-29 05:15:19.652438809 +0000 UTC m=+0.151951498 container attach 9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nobel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 05:15:19 compute-0 ceph-mon[75176]: 9.1c scrub starts
Nov 29 05:15:19 compute-0 ceph-mon[75176]: 9.1c scrub ok
Nov 29 05:15:19 compute-0 ceph-mon[75176]: 6.e scrub starts
Nov 29 05:15:19 compute-0 ceph-mon[75176]: 6.e scrub ok
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]: {
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "osd_id": 0,
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "type": "bluestore"
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:     },
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "osd_id": 1,
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "type": "bluestore"
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:     },
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "osd_id": 2,
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:         "type": "bluestore"
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]:     }
Nov 29 05:15:20 compute-0 peaceful_nobel[123251]: }
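[note] This second JSON block, from peaceful_nobel, is the `ceph-volume raw list --format json` result requested by the cephadm call above: bluestore OSDs keyed by osd_uuid rather than by OSD id. A small sketch that sanity-checks such output against the cluster fsid seen throughout this log:

    import json

    EXPECTED_FSID = "93f82912-647c-5e78-b081-707d0a2966d8"  # cluster fsid from this log

    def check_raw_osds(raw: str, fsid: str = EXPECTED_FSID) -> None:
        """Validate `ceph-volume raw list --format json` output."""
        for osd_uuid, info in json.loads(raw).items():
            if info["osd_uuid"] != osd_uuid or info["type"] != "bluestore":
                raise ValueError(f"malformed entry for {osd_uuid}")
            if info["ceph_fsid"] != fsid:
                raise ValueError(f"osd.{info['osd_id']} belongs to {info['ceph_fsid']}")
            print(f"osd.{info['osd_id']}: {info['device']} ({osd_uuid})")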
Nov 29 05:15:20 compute-0 systemd[1]: libpod-9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81.scope: Deactivated successfully.
Nov 29 05:15:20 compute-0 systemd[1]: libpod-9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81.scope: Consumed 1.090s CPU time.
Nov 29 05:15:20 compute-0 conmon[123251]: conmon 9266786a60c3c989525a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81.scope/container/memory.events
Nov 29 05:15:20 compute-0 podman[123234]: 2025-11-29 05:15:20.730838341 +0000 UTC m=+1.230351040 container died 9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nobel, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-272d1debef35349fadae6a29df653137203b0862f337f3e178c9138eba8281f4-merged.mount: Deactivated successfully.
Nov 29 05:15:20 compute-0 podman[123234]: 2025-11-29 05:15:20.795187504 +0000 UTC m=+1.294700173 container remove 9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 05:15:20 compute-0 systemd[1]: libpod-conmon-9266786a60c3c989525a8f4a16c47169c17e5c2dea00a3020f379d1d85f1bf81.scope: Deactivated successfully.
Nov 29 05:15:20 compute-0 sudo[123129]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:15:20 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:15:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:15:20 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:15:20 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 5709d045-2932-4407-8826-0b1b6124d192 does not exist
Nov 29 05:15:20 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 5c1b3c3c-eeb8-4171-926f-f99f0204f8a1 does not exist
Nov 29 05:15:20 compute-0 sudo[123298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:15:20 compute-0 ceph-mon[75176]: pgmap v310: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:20 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:15:20 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:15:20 compute-0 sudo[123298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:20 compute-0 sudo[123298]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:21 compute-0 sudo[123323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:15:21 compute-0 sudo[123323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:15:21 compute-0 sudo[123323]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:21 compute-0 sshd-session[123348]: Accepted publickey for zuul from 192.168.122.30 port 33814 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:15:21 compute-0 systemd-logind[793]: New session 39 of user zuul.
Nov 29 05:15:21 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 29 05:15:21 compute-0 sshd-session[123348]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:15:22 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Nov 29 05:15:22 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Nov 29 05:15:22 compute-0 ceph-mon[75176]: pgmap v311: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:22 compute-0 ceph-mon[75176]: 9.1e deep-scrub starts
Nov 29 05:15:22 compute-0 python3.9[123501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
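[note] From here on the journal records an Ansible run (the zuul EDPM deployment): each `python3.9[...]: ansible-<module> Invoked with ...` line is the flattened argument dump of one task. A minimal parser sketch for recovering module names and arguments from such lines; the regexes are shaped only by the examples visible in this log:

    import re

    INVOKED = re.compile(r"ansible-(\S+) Invoked with (.*)$")
    ARG = re.compile(r"(\w+)=(\[[^\]]*\]|\S+)")  # bracketed lists may contain spaces

    def parse_invocation(line: str) -> tuple[str, dict[str, str]]:
        """Recover (module, args) from a journal line; values stay strings."""
        m = INVOKED.search(line)
        if m is None:
            raise ValueError("not an Ansible invocation line")
        return m.group(1), dict(ARG.findall(m.group(2)))

    module, args = parse_invocation(
        "python3.9[123501]: ansible-ansible.builtin.setup Invoked with "
        "gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] "
        "fact_path=/etc/ansible/facts.d")
    assert module == "ansible.builtin.setup" and args["gather_timeout"] == "10"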
Nov 29 05:15:23 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 29 05:15:23 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 29 05:15:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:23 compute-0 ceph-mon[75176]: 9.1e deep-scrub ok
Nov 29 05:15:23 compute-0 sudo[123655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbxsrksgygffmhxlhuvutatjqjsaytaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393323.453261-33-199671064526386/AnsiballZ_file.py'
Nov 29 05:15:23 compute-0 sudo[123655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:24 compute-0 python3.9[123657]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:24 compute-0 sudo[123655]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:24 compute-0 ceph-mon[75176]: 9.7 scrub starts
Nov 29 05:15:24 compute-0 ceph-mon[75176]: 9.7 scrub ok
Nov 29 05:15:24 compute-0 ceph-mon[75176]: pgmap v312: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:24 compute-0 sudo[123830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juilrsmdypokiipkkzkmjmroftggeafk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393324.4464028-41-5808984730132/AnsiballZ_stat.py'
Nov 29 05:15:24 compute-0 sudo[123830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:25 compute-0 python3.9[123832]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:25 compute-0 sudo[123830]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:25 compute-0 sudo[123908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwnnqiwywgeyaegybpqrmmebwgllzhwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393324.4464028-41-5808984730132/AnsiballZ_file.py'
Nov 29 05:15:25 compute-0 sudo[123908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:25 compute-0 python3.9[123910]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.arx7ng14 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:25 compute-0 sudo[123908]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:26 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 29 05:15:26 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 29 05:15:26 compute-0 sudo[124060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezkwjbzpsmrlfvdenrhvkrybsvmbfcbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393326.2018259-61-70171291307506/AnsiballZ_stat.py'
Nov 29 05:15:26 compute-0 sudo[124060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:26 compute-0 python3.9[124062]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:26 compute-0 sudo[124060]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:26 compute-0 ceph-mon[75176]: pgmap v313: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:27 compute-0 sudo[124138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvjnmsfdnclwvbwuoojnasdqrumwceyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393326.2018259-61-70171291307506/AnsiballZ_file.py'
Nov 29 05:15:27 compute-0 sudo[124138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:27 compute-0 python3.9[124140]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.gsx541ln recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:27 compute-0 sudo[124138]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:27 compute-0 sudo[124290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwtisoufykggfvdyvttcwdsdhgxxrepy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393327.496572-74-109783521328322/AnsiballZ_file.py'
Nov 29 05:15:27 compute-0 sudo[124290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:27 compute-0 ceph-mon[75176]: 9.17 scrub starts
Nov 29 05:15:27 compute-0 ceph-mon[75176]: 9.17 scrub ok
Nov 29 05:15:28 compute-0 python3.9[124292]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:15:28 compute-0 sudo[124290]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:28 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 29 05:15:28 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 29 05:15:28 compute-0 sudo[124442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltizbqkuwaceuosdjbfxvhadhosdrxth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393328.3247204-82-29486898042039/AnsiballZ_stat.py'
Nov 29 05:15:28 compute-0 sudo[124442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:28 compute-0 python3.9[124444]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:28 compute-0 sudo[124442]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:28 compute-0 ceph-mon[75176]: pgmap v314: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:29 compute-0 sudo[124520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asjsjiuryyypkzfkttcyiyhhjgxnzctu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393328.3247204-82-29486898042039/AnsiballZ_file.py'
Nov 29 05:15:29 compute-0 sudo[124520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:29 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 29 05:15:29 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 29 05:15:29 compute-0 python3.9[124522]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:15:29 compute-0 sudo[124520]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:29 compute-0 ceph-mon[75176]: 9.f scrub starts
Nov 29 05:15:29 compute-0 ceph-mon[75176]: 9.f scrub ok
Nov 29 05:15:29 compute-0 ceph-mon[75176]: 6.2 deep-scrub starts
Nov 29 05:15:29 compute-0 ceph-mon[75176]: 6.2 deep-scrub ok
Nov 29 05:15:30 compute-0 sudo[124672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oamvlzjuisjhldvjmhaxyovqwuzavedn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393329.730428-82-70473411170248/AnsiballZ_stat.py'
Nov 29 05:15:30 compute-0 sudo[124672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:30 compute-0 python3.9[124674]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:30 compute-0 sudo[124672]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:30 compute-0 sudo[124750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zllytnzhdmsqnhvfsioryvbxgwtzsgiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393329.730428-82-70473411170248/AnsiballZ_file.py'
Nov 29 05:15:30 compute-0 sudo[124750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:30 compute-0 python3.9[124752]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:15:30 compute-0 sudo[124750]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:30 compute-0 ceph-mon[75176]: pgmap v315: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:31 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 29 05:15:31 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 29 05:15:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:31 compute-0 sudo[124902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsofvwywtlrshvasweejrlxwwnksgauu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393331.0777755-105-111263032640645/AnsiballZ_file.py'
Nov 29 05:15:31 compute-0 sudo[124902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:31 compute-0 python3.9[124904]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:31 compute-0 sudo[124902]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:31 compute-0 ceph-mon[75176]: 9.8 scrub starts
Nov 29 05:15:31 compute-0 ceph-mon[75176]: 9.8 scrub ok
Nov 29 05:15:32 compute-0 sudo[125054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mibvjidugvxfqslrftunccjzvciqglgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393331.9491105-113-62984632361313/AnsiballZ_stat.py'
Nov 29 05:15:32 compute-0 sudo[125054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:32 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Nov 29 05:15:32 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Nov 29 05:15:32 compute-0 python3.9[125056]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:32 compute-0 sudo[125054]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:32 compute-0 sudo[125132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grsovcmorrjjvoewiseqobtosqjjclbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393331.9491105-113-62984632361313/AnsiballZ_file.py'
Nov 29 05:15:32 compute-0 sudo[125132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:32 compute-0 ceph-mon[75176]: pgmap v316: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:32 compute-0 ceph-mon[75176]: 6.c deep-scrub starts
Nov 29 05:15:32 compute-0 ceph-mon[75176]: 6.c deep-scrub ok
Nov 29 05:15:33 compute-0 python3.9[125134]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:33 compute-0 sudo[125132]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:33 compute-0 sudo[125284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgfffgaxhjoisxtvmloqsiywmffwrxkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393333.3554406-125-267693412681016/AnsiballZ_stat.py'
Nov 29 05:15:33 compute-0 sudo[125284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:34 compute-0 python3.9[125286]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:34 compute-0 sudo[125284]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:34 compute-0 sudo[125362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pikegsryjvzwyevakeigqyayfchamwml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393333.3554406-125-267693412681016/AnsiballZ_file.py'
Nov 29 05:15:34 compute-0 sudo[125362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:34 compute-0 python3.9[125364]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:34 compute-0 sudo[125362]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:35 compute-0 ceph-mon[75176]: pgmap v317: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:35 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 29 05:15:35 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 29 05:15:35 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 29 05:15:35 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 29 05:15:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:35 compute-0 sudo[125514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwhbbaltkkeqtzfsobwidyueklmbecwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393334.8441026-137-69599903480227/AnsiballZ_systemd.py'
Nov 29 05:15:35 compute-0 sudo[125514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:35 compute-0 python3.9[125516]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:15:35 compute-0 systemd[1]: Reloading.
Nov 29 05:15:36 compute-0 systemd-sysv-generator[125548]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:15:36 compute-0 systemd-rc-local-generator[125543]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:15:36 compute-0 ceph-mon[75176]: 6.4 deep-scrub starts
Nov 29 05:15:36 compute-0 ceph-mon[75176]: 9.18 scrub starts
Nov 29 05:15:36 compute-0 ceph-mon[75176]: 6.4 deep-scrub ok
Nov 29 05:15:36 compute-0 ceph-mon[75176]: 9.18 scrub ok
Nov 29 05:15:36 compute-0 sudo[125514]: pam_unix(sudo:session): session closed for user root
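
The ansible.builtin.systemd task at 05:15:35 (daemon_reload=True, enabled=True, state=started) is roughly the following two commands, sketched as shell; the daemon-reload is what produced the "Reloading." line and the generator warnings above:

    systemctl daemon-reload                          # pick up the freshly written unit and preset files
    systemctl enable --now edpm-container-shutdown   # enable (per 91-edpm-container-shutdown.preset) and start
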
Nov 29 05:15:36 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 29 05:15:36 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 29 05:15:36 compute-0 sudo[125704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqaufvtwbxzckbcsawphdjisipotxxgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393336.5076017-145-116918053849046/AnsiballZ_stat.py'
Nov 29 05:15:36 compute-0 sudo[125704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:37 compute-0 ceph-mon[75176]: pgmap v318: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:37 compute-0 ceph-mon[75176]: 6.b scrub starts
Nov 29 05:15:37 compute-0 ceph-mon[75176]: 6.b scrub ok
Nov 29 05:15:37 compute-0 python3.9[125706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:37 compute-0 sudo[125704]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:37 compute-0 sudo[125782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkdwbdvawnqvmlaulqqtfoxrssdtchoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393336.5076017-145-116918053849046/AnsiballZ_file.py'
Nov 29 05:15:37 compute-0 sudo[125782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:37 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 29 05:15:37 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 29 05:15:37 compute-0 python3.9[125784]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:37 compute-0 sudo[125782]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:38 compute-0 ceph-mon[75176]: 6.d scrub starts
Nov 29 05:15:38 compute-0 ceph-mon[75176]: 6.d scrub ok
Nov 29 05:15:38 compute-0 sudo[125934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbvwzsifspvwxwhvcpaejqocfklerwie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393337.8562968-157-78903144708123/AnsiballZ_stat.py'
Nov 29 05:15:38 compute-0 sudo[125934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:38 compute-0 python3.9[125936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:38 compute-0 sudo[125934]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:38 compute-0 sudo[126012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppuwlnenvterwpildruduchwkkajbysp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393337.8562968-157-78903144708123/AnsiballZ_file.py'
Nov 29 05:15:38 compute-0 sudo[126012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:38 compute-0 python3.9[126014]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:38 compute-0 sudo[126012]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:39 compute-0 ceph-mon[75176]: pgmap v319: 305 pgs: 305 active+clean; 455 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:39 compute-0 sudo[126164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxbfqpeknvujmjnymrjhnnheskblkbau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393339.1080506-169-259851958832323/AnsiballZ_systemd.py'
Nov 29 05:15:39 compute-0 sudo[126164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:39 compute-0 python3.9[126166]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:15:39 compute-0 systemd[1]: Reloading.
Nov 29 05:15:39 compute-0 systemd-rc-local-generator[126195]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:15:39 compute-0 systemd-sysv-generator[126199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:15:40 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 05:15:40 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 05:15:40 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 05:15:40 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 05:15:40 compute-0 sudo[126164]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:41 compute-0 python3.9[126359]: ansible-ansible.builtin.service_facts Invoked
Nov 29 05:15:41 compute-0 ceph-mon[75176]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:41 compute-0 network[126376]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 05:15:41 compute-0 network[126377]: 'network-scripts' will be removed from distribution in near future.
Nov 29 05:15:41 compute-0 network[126378]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 05:15:41 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 29 05:15:41 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:15:41
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.mgr', 'vms', 'backups', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data']
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
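
The balancer block above is the mgr's automatic upmap optimizer at work: each pass it builds a plan (auto_<timestamp>), caps data movement at the logged max-misplaced ratio of 0.05, and here prepared 0 of an allowed 10 changes because the 305 active+clean PGs are already evenly placed. A sketch of inspecting the same state from the CLI, assuming client access to the cluster:

    ceph balancer status                             # shows mode: upmap and any queued plans
    ceph config get mgr target_max_misplaced_ratio   # the 0.050000 ceiling in the log
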
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:15:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:41 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Nov 29 05:15:41 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Nov 29 05:15:42 compute-0 ceph-mon[75176]: 9.c scrub starts
Nov 29 05:15:42 compute-0 ceph-mon[75176]: 9.c scrub ok
Nov 29 05:15:42 compute-0 ceph-mon[75176]: 9.15 deep-scrub starts
Nov 29 05:15:42 compute-0 ceph-mon[75176]: 9.15 deep-scrub ok
Nov 29 05:15:42 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 29 05:15:42 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 29 05:15:43 compute-0 ceph-mon[75176]: pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:43 compute-0 ceph-mon[75176]: 6.f scrub starts
Nov 29 05:15:43 compute-0 ceph-mon[75176]: 6.f scrub ok
Nov 29 05:15:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:44 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Nov 29 05:15:44 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Nov 29 05:15:45 compute-0 ceph-mon[75176]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:45 compute-0 ceph-mon[75176]: 9.1f deep-scrub starts
Nov 29 05:15:45 compute-0 ceph-mon[75176]: 9.1f deep-scrub ok
Nov 29 05:15:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:46 compute-0 sudo[126638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylieixpxvqajvbbxsjdicfkwfeccqsut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393345.6941123-195-76904558947328/AnsiballZ_stat.py'
Nov 29 05:15:46 compute-0 sudo[126638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:46 compute-0 python3.9[126640]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:46 compute-0 sudo[126638]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:46 compute-0 sudo[126716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsondarmujdamngmekstrlortgifaznf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393345.6941123-195-76904558947328/AnsiballZ_file.py'
Nov 29 05:15:46 compute-0 sudo[126716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:46 compute-0 python3.9[126718]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:46 compute-0 sudo[126716]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:47 compute-0 ceph-mon[75176]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:47 compute-0 sudo[126868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndftlnmxsavzkxmwwrqhqxmeoejsffbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393347.005078-208-77447176872326/AnsiballZ_file.py'
Nov 29 05:15:47 compute-0 sudo[126868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:47 compute-0 python3.9[126870]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:47 compute-0 sudo[126868]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:48 compute-0 sudo[127020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozolezmtekkgvzojetkzuugaxllzxynz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393347.662449-216-228771641936206/AnsiballZ_stat.py'
Nov 29 05:15:48 compute-0 sudo[127020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:48 compute-0 python3.9[127022]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:48 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 29 05:15:48 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 29 05:15:48 compute-0 sudo[127020]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:48 compute-0 sudo[127098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmmxrbokyopnrakkltegqbvlhrcokdhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393347.662449-216-228771641936206/AnsiballZ_file.py'
Nov 29 05:15:48 compute-0 sudo[127098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:48 compute-0 python3.9[127100]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:48 compute-0 sudo[127098]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:49 compute-0 ceph-mon[75176]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:49 compute-0 ceph-mon[75176]: 9.13 scrub starts
Nov 29 05:15:49 compute-0 ceph-mon[75176]: 9.13 scrub ok
Nov 29 05:15:49 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 29 05:15:49 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 29 05:15:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:49 compute-0 sudo[127250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ownwyhjkuyjekcdkebbkgsaysmzjhzsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393349.0458546-231-270629123469206/AnsiballZ_timezone.py'
Nov 29 05:15:49 compute-0 sudo[127250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:49 compute-0 python3.9[127252]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 05:15:49 compute-0 systemd[1]: Starting Time & Date Service...
Nov 29 05:15:49 compute-0 systemd[1]: Started Time & Date Service.
Nov 29 05:15:49 compute-0 sudo[127250]: pam_unix(sudo:session): session closed for user root
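
The community.general.timezone task (name=UTC) goes through systemd's timedated D-Bus service, which is why "Time & Date Service" starts immediately afterwards (it idles out again at 05:16:19 below). The one-line CLI equivalent, sketched:

    timedatectl set-timezone UTC   # also activates systemd-timedated on demand
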
Nov 29 05:15:50 compute-0 ceph-mon[75176]: 9.19 scrub starts
Nov 29 05:15:50 compute-0 ceph-mon[75176]: 9.19 scrub ok
Nov 29 05:15:50 compute-0 sudo[127406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcpuyibyzrvygxchlxrhohbukufjtnuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393350.207787-240-73960116246373/AnsiballZ_file.py'
Nov 29 05:15:50 compute-0 sudo[127406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:50 compute-0 python3.9[127408]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:50 compute-0 sudo[127406]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
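
Every pg_autoscaler pair above follows one formula: the raw PG target is the pool's capacity ratio times its bias times the cluster's PG budget, which is then quantized (with per-pool floors) and left at the current value when the change is too small to act on. The logged numbers are self-consistent with a budget of 300, which plausibly comes from mon_target_pg_per_osd = 100 across the 3 OSDs backing the 60 GiB shown in the pgmap lines (64411926528 bytes is that same capacity). A quick check of two rows, sketched with python3:

    python3 -c 'print(7.185749983720779e-06 * 1.0 * 300)'   # ~0.0021557, the '.mgr' pg target above
    python3 -c 'print(5.087256625643029e-07 * 4.0 * 300)'   # ~0.0006105, the 'cephfs.cephfs.meta' target
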
Nov 29 05:15:51 compute-0 ceph-mon[75176]: pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:51 compute-0 sudo[127558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utgizsazehwhogkppnmybtvvjdabkbsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393350.9511125-248-150218956095578/AnsiballZ_stat.py'
Nov 29 05:15:51 compute-0 sudo[127558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:51 compute-0 python3.9[127560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:51 compute-0 sudo[127558]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:51 compute-0 sudo[127637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksehopxpwrwmaghsatnbqnhympdyejos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393350.9511125-248-150218956095578/AnsiballZ_file.py'
Nov 29 05:15:51 compute-0 sudo[127637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:52 compute-0 python3.9[127639]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:52 compute-0 sudo[127637]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:52 compute-0 sudo[127789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wynkqynoiirioxgorjfoowjzthjhsldd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393352.2405155-260-59502196256593/AnsiballZ_stat.py'
Nov 29 05:15:52 compute-0 sudo[127789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:52 compute-0 python3.9[127791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:52 compute-0 sudo[127789]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:53 compute-0 sudo[127867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtowtqtsetbszxzsbvxjeeqmzfdeowhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393352.2405155-260-59502196256593/AnsiballZ_file.py'
Nov 29 05:15:53 compute-0 sudo[127867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:53 compute-0 ceph-mon[75176]: pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:53 compute-0 python3.9[127869]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.i2o0638i recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:53 compute-0 sudo[127867]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:53 compute-0 sudo[128019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rivldprwzdexkhoyyqvyprhhkhwtofpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393353.4796267-272-13377796395883/AnsiballZ_stat.py'
Nov 29 05:15:53 compute-0 sudo[128019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:54 compute-0 python3.9[128021]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:54 compute-0 sudo[128019]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:54 compute-0 ceph-mon[75176]: pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:54 compute-0 sudo[128097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbglptqdajqaluutriontvbvghxwraef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393353.4796267-272-13377796395883/AnsiballZ_file.py'
Nov 29 05:15:54 compute-0 sudo[128097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:54 compute-0 python3.9[128099]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:54 compute-0 sudo[128097]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:56 compute-0 sudo[128249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qywcgelscweftfolxhsvqdyqnyalvdjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393354.9400125-285-54664352917267/AnsiballZ_command.py'
Nov 29 05:15:56 compute-0 sudo[128249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:56 compute-0 python3.9[128251]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:15:56 compute-0 ceph-mon[75176]: pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:56 compute-0 sudo[128249]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:56 compute-0 rsyslogd[1003]: imjournal: 1389 messages lost due to rate-limiting (20000 allowed within 600 seconds)
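
The imjournal message means rsyslog dropped 1389 journal entries because this Ansible run exceeded the default budget of 20000 messages per 600 seconds, so the syslog copy of this window is incomplete (the journal itself still has everything). If the full stream matters in CI, the ingest budget can be raised; a sketch for /etc/rsyslog.conf, assuming the stock imjournal module and its documented ratelimit parameters:

    # raise the journal ingest budget (defaults were 20000 per 600 s, per the message above)
    module(load="imjournal"
           StateFile="imjournal.state"
           Ratelimit.Interval="600"
           Ratelimit.Burst="100000")
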
Nov 29 05:15:57 compute-0 sudo[128402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqxqtfprvayscmkziqfegsmeupisqjhc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764393356.6637154-293-70687015272857/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 05:15:57 compute-0 sudo[128402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:57 compute-0 python3[128404]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 05:15:57 compute-0 sudo[128402]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:57 compute-0 sudo[128554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duaanynqlnojoqwguqvlrzjqqeylfpsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393357.622019-301-138505948463944/AnsiballZ_stat.py'
Nov 29 05:15:57 compute-0 sudo[128554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:58 compute-0 python3.9[128556]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:58 compute-0 sudo[128554]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:58 compute-0 sudo[128632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udhngpvmntdbwaollpcasmegoppoqgrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393357.622019-301-138505948463944/AnsiballZ_file.py'
Nov 29 05:15:58 compute-0 sudo[128632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:58 compute-0 ceph-mon[75176]: pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:58 compute-0 python3.9[128634]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:58 compute-0 sudo[128632]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:59 compute-0 sudo[128784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gneimjnumnwyncdarhtflftoomkgevhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393358.869178-313-2384038785050/AnsiballZ_stat.py'
Nov 29 05:15:59 compute-0 sudo[128784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:15:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:15:59 compute-0 python3.9[128786]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:15:59 compute-0 sudo[128784]: pam_unix(sudo:session): session closed for user root
Nov 29 05:15:59 compute-0 sudo[128862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbfjktfygacieukaltpaoxbigruifhaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393358.869178-313-2384038785050/AnsiballZ_file.py'
Nov 29 05:15:59 compute-0 sudo[128862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:15:59 compute-0 python3.9[128864]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:15:59 compute-0 sudo[128862]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:00 compute-0 sudo[129014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aatxouyaekmfptbtskkuflfxfusmywba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393360.0955422-325-217350456674956/AnsiballZ_stat.py'
Nov 29 05:16:00 compute-0 sudo[129014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:00 compute-0 ceph-mon[75176]: pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:00 compute-0 python3.9[129016]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:16:00 compute-0 sudo[129014]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:00 compute-0 sudo[129092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbwkgjlpwtdimxilmhieghzwzdykpbok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393360.0955422-325-217350456674956/AnsiballZ_file.py'
Nov 29 05:16:00 compute-0 sudo[129092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:01 compute-0 python3.9[129094]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:16:01 compute-0 sudo[129092]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:01 compute-0 sudo[129244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyxyvojzlsicmqymmjwzvroewywqtitf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393361.2895405-337-176498722384290/AnsiballZ_stat.py'
Nov 29 05:16:01 compute-0 sudo[129244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:01 compute-0 python3.9[129246]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:16:01 compute-0 sudo[129244]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:02 compute-0 sudo[129322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofdilouhmjhawpcrcacqqiyxorezuvks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393361.2895405-337-176498722384290/AnsiballZ_file.py'
Nov 29 05:16:02 compute-0 sudo[129322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:02 compute-0 python3.9[129324]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:16:02 compute-0 sudo[129322]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:02 compute-0 ceph-mon[75176]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:03 compute-0 sudo[129474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkrhhcvzfkaykeqlmalxvabxexjjwniu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393362.7093868-349-64507869430304/AnsiballZ_stat.py'
Nov 29 05:16:03 compute-0 sudo[129474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:03 compute-0 python3.9[129476]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:16:03 compute-0 sudo[129474]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:03 compute-0 sudo[129552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boeawgjmmflvhkqfasjbmnknddjhuxlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393362.7093868-349-64507869430304/AnsiballZ_file.py'
Nov 29 05:16:03 compute-0 sudo[129552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:03 compute-0 python3.9[129554]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:16:03 compute-0 sudo[129552]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:04 compute-0 sudo[129704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayyorygkzhqzsetyfykvdwhsbfnduxst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393364.180248-362-109345401732175/AnsiballZ_command.py'
Nov 29 05:16:04 compute-0 sudo[129704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:04 compute-0 ceph-mon[75176]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:04 compute-0 python3.9[129706]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:16:04 compute-0 sudo[129704]: pam_unix(sudo:session): session closed for user root
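
The command task at 05:16:04 is the syntax gate for the nft files written above: it concatenates the five EDPM files in their intended load order and has nft parse them in check-only mode, so nothing is committed to the kernel. The same pipeline, reflowed for readability:

    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft \
      | nft -c -f -    # -c: check/dry-run only; -f -: read the ruleset from stdin
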
Nov 29 05:16:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:05 compute-0 sudo[129859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yntpusfekizndtkamnxyloxhgogeesxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393364.9087727-370-89986402349231/AnsiballZ_blockinfile.py'
Nov 29 05:16:05 compute-0 sudo[129859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:05 compute-0 python3.9[129861]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:16:05 compute-0 sudo[129859]: pam_unix(sudo:session): session closed for user root
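
Given the blockinfile arguments above (marker "# {mark} ANSIBLE MANAGED BLOCK" with BEGIN/END, validate="nft -c -f %s"), the task maintains a managed region like the following inside /etc/sysconfig/nftables.conf, re-validating the whole file with nft before writing it; a sketch of the resulting block:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
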
Nov 29 05:16:06 compute-0 sudo[130011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzlvrygfvhdzgdehzjhiqjxubinudlyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393365.8865526-379-244252808097198/AnsiballZ_file.py'
Nov 29 05:16:06 compute-0 sudo[130011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:06 compute-0 python3.9[130013]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:16:06 compute-0 sudo[130011]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:06 compute-0 ceph-mon[75176]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:06 compute-0 sudo[130163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnbabgqbxindmomihepxihytjoefydch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393366.6132329-379-30204322964160/AnsiballZ_file.py'
Nov 29 05:16:06 compute-0 sudo[130163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:07 compute-0 python3.9[130165]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:16:07 compute-0 sudo[130163]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:07 compute-0 sudo[130315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbfxirpondmercjyfchgdqgqswolofxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393367.398523-394-50987680947825/AnsiballZ_mount.py'
Nov 29 05:16:07 compute-0 sudo[130315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:08 compute-0 python3.9[130317]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 05:16:08 compute-0 sudo[130315]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:08 compute-0 ceph-mon[75176]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:08 compute-0 sudo[130467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyvqdqzdnkadmmnwemgrjyixqrosjdwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393368.4545257-394-178920000098272/AnsiballZ_mount.py'
Nov 29 05:16:08 compute-0 sudo[130467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:09 compute-0 python3.9[130469]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 05:16:09 compute-0 sudo[130467]: pam_unix(sudo:session): session closed for user root
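
The two ansible.posix.mount tasks mount a dedicated hugetlbfs instance per page size under the directories created at 05:16:06-07 (owner zuul, group hugetlbfs, mode 0775); with state=mounted and boot=True each is mounted now and persisted to /etc/fstab. The shell and fstab equivalents, sketched from the module arguments:

    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # matching /etc/fstab entries (dump=0, passno=0 per the module args):
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
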
Nov 29 05:16:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:09 compute-0 sshd-session[123351]: Connection closed by 192.168.122.30 port 33814
Nov 29 05:16:09 compute-0 sshd-session[123348]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:16:09 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 05:16:09 compute-0 systemd[1]: session-39.scope: Consumed 32.968s CPU time.
Nov 29 05:16:09 compute-0 systemd-logind[793]: Session 39 logged out. Waiting for processes to exit.
Nov 29 05:16:09 compute-0 systemd-logind[793]: Removed session 39.
Nov 29 05:16:10 compute-0 ceph-mon[75176]: pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:16:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:16:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:16:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:16:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:16:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:16:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:12 compute-0 ceph-mon[75176]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:14 compute-0 ceph-mon[75176]: pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:15 compute-0 sshd-session[130494]: Accepted publickey for zuul from 192.168.122.30 port 40322 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:16:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:15 compute-0 systemd-logind[793]: New session 40 of user zuul.
Nov 29 05:16:15 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 29 05:16:15 compute-0 sshd-session[130494]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:16:16 compute-0 sudo[130647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-barbzoihwlvfbzgtcaiksewnvmlzjzdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393375.5723376-16-1869444248281/AnsiballZ_tempfile.py'
Nov 29 05:16:16 compute-0 sudo[130647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:16 compute-0 python3.9[130649]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 05:16:16 compute-0 sudo[130647]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:16 compute-0 ceph-mon[75176]: pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:17 compute-0 sudo[130799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcjkurrzszozctjogpddqkknptoxngok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393376.6548479-28-188764614546887/AnsiballZ_stat.py'
Nov 29 05:16:17 compute-0 sudo[130799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:17 compute-0 python3.9[130801]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:16:17 compute-0 sudo[130799]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:18 compute-0 sudo[130953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arzpwzzafqatvblruluzcprhdfidhded ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393377.6933012-36-211352934613968/AnsiballZ_slurp.py'
Nov 29 05:16:18 compute-0 sudo[130953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:18 compute-0 python3.9[130955]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 05:16:18 compute-0 sudo[130953]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:18 compute-0 ceph-mon[75176]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:19 compute-0 sudo[131105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvggaantvsljipyssvsniszyrmlbyrdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393378.6024542-44-275381334213989/AnsiballZ_stat.py'
Nov 29 05:16:19 compute-0 sudo[131105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:19 compute-0 python3.9[131107]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.gvo9zkfa follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:16:19 compute-0 sudo[131105]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:19 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 05:16:20 compute-0 sudo[131232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phevbwkienakckxwdfzrfniqwcynzqqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393378.6024542-44-275381334213989/AnsiballZ_copy.py'
Nov 29 05:16:20 compute-0 sudo[131232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:20 compute-0 python3.9[131234]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.gvo9zkfa mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393378.6024542-44-275381334213989/.source.gvo9zkfa _original_basename=.ovo4rsvg follow=False checksum=1b0e63c11fa90fba31690abb7f0e5ecfc577d3bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:16:20 compute-0 sudo[131232]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:20 compute-0 ceph-mon[75176]: pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:21 compute-0 sudo[131356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:16:21 compute-0 sudo[131356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:21 compute-0 sudo[131356]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:21 compute-0 sudo[131415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmiqizzmlbievydehomjaqzetzputhtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393380.4268818-59-21260230877088/AnsiballZ_setup.py'
Nov 29 05:16:21 compute-0 sudo[131415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:21 compute-0 sudo[131405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:16:21 compute-0 sudo[131405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:21 compute-0 sudo[131405]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:21 compute-0 sudo[131437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:16:21 compute-0 sudo[131437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:21 compute-0 sudo[131437]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:21 compute-0 sudo[131462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:16:21 compute-0 sudo[131462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:21 compute-0 python3.9[131434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:16:21 compute-0 sudo[131415]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:21 compute-0 sudo[131462]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:16:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:16:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:16:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:16:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:16:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:16:21 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev b783ab87-bc32-4f82-be81-925589042a46 does not exist
Nov 29 05:16:21 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 0fa7cbe2-8bd6-493d-b2c8-6896ddbb8140 does not exist
Nov 29 05:16:21 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 768e7e1d-41d1-45a6-ac6d-acddb9181dd0 does not exist
Nov 29 05:16:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:16:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:16:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:16:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:16:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:16:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:16:21 compute-0 sudo[131596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:16:21 compute-0 sudo[131596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:21 compute-0 sudo[131596]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:22 compute-0 sudo[131621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:16:22 compute-0 sudo[131621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:22 compute-0 sudo[131621]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:22 compute-0 sudo[131647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:16:22 compute-0 sudo[131647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:22 compute-0 sudo[131647]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:22 compute-0 sudo[131694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:16:22 compute-0 sudo[131694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:22 compute-0 sudo[131769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsrqyklacuxvvdypvhiotvgwodhljfku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393381.7221735-68-202587678523694/AnsiballZ_blockinfile.py'
Nov 29 05:16:22 compute-0 sudo[131769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:22 compute-0 python3.9[131771]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMckHMduWmwA/jneofKzqltVrdb/vEVNoPwADfQfHjxo2ViAjKtzRJxQm+bTvpTXgt3d3GaLwohXhYMtcnWss0rEYtIGMLiXWJAB76Vi4azFd32Hy0mDTGhpqL5tz3X/QJFmASZVWlpRz77RZoFzhuMtQpF581gmKi8QLN3n4kyPvi8IBRjIvdbSyN1hkk5nbYZFrdOhA0K7FLalaYs9fIyoD0rH+dijNp/mY8EbyOAWiPIFfzMZWqy9OkXlUKH6233dlpLGCHfD1uwqM55rv7g+qtOrKiOnqkc5b24MfjM3Dq8B/kIR3GisItM2fI/avStY0whFRyYPTqysal5H+pXy5+QCOGwsWv0POhypuwSVSbtY3NcfizytHcPT2Au6g3Xx/Gazoxx4fVkVLTjtzhz8URfMzAclsZVcUxtFyZlGHtoXumLkWdYeLYQA4dqkQVL7KwOEQp31HXuBfsc98k/UoOj9+SAEbQrLsEBhRXTSsD2bL350GMA7poDjiSC1k=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAwQmzwqCS97U8wjy82krUlVUeH2sOvejp9p1btw+sbe
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHbvzG6Snia8dc8X++wUykISUD7zTpLyaTM0CVExLn67fyxHoL2pCwIcx6cP7HnIRC6S3Et2Ooooe+xc0kenKn0=
                                              create=True mode=0644 path=/tmp/ansible.gvo9zkfa state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:16:22 compute-0 sudo[131769]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:22 compute-0 podman[131811]: 2025-11-29 05:16:22.581002974 +0000 UTC m=+0.056826971 container create f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:16:22 compute-0 ceph-mon[75176]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:22 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:16:22 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:16:22 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:16:22 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:16:22 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:16:22 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:16:22 compute-0 systemd[1]: Started libpod-conmon-f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882.scope.
Nov 29 05:16:22 compute-0 podman[131811]: 2025-11-29 05:16:22.560000802 +0000 UTC m=+0.035824899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:16:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:16:22 compute-0 podman[131811]: 2025-11-29 05:16:22.693900518 +0000 UTC m=+0.169724625 container init f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 05:16:22 compute-0 podman[131811]: 2025-11-29 05:16:22.70569162 +0000 UTC m=+0.181515627 container start f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:16:22 compute-0 podman[131811]: 2025-11-29 05:16:22.709506642 +0000 UTC m=+0.185330739 container attach f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:16:22 compute-0 infallible_cannon[131851]: 167 167
Nov 29 05:16:22 compute-0 systemd[1]: libpod-f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882.scope: Deactivated successfully.
Nov 29 05:16:22 compute-0 conmon[131851]: conmon f8c6bebe7f7347303974 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882.scope/container/memory.events
Nov 29 05:16:22 compute-0 podman[131868]: 2025-11-29 05:16:22.779023167 +0000 UTC m=+0.043193766 container died f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:16:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f97b1989deec770e30f5a1fafbb09b84b55d85639e46c740ab9a66ecd718a9b0-merged.mount: Deactivated successfully.
Nov 29 05:16:22 compute-0 podman[131868]: 2025-11-29 05:16:22.833685345 +0000 UTC m=+0.097855904 container remove f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:16:22 compute-0 systemd[1]: libpod-conmon-f8c6bebe7f7347303974e434c855983fd3a5f789e93eb4350de9635c1785b882.scope: Deactivated successfully.
Nov 29 05:16:23 compute-0 podman[131930]: 2025-11-29 05:16:23.055303952 +0000 UTC m=+0.068863440 container create dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:16:23 compute-0 systemd[1]: Started libpod-conmon-dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8.scope.
Nov 29 05:16:23 compute-0 podman[131930]: 2025-11-29 05:16:23.026370219 +0000 UTC m=+0.039929767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:16:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:16:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:23 compute-0 podman[131930]: 2025-11-29 05:16:23.155109151 +0000 UTC m=+0.168668639 container init dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:16:23 compute-0 podman[131930]: 2025-11-29 05:16:23.163976944 +0000 UTC m=+0.177536402 container start dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:16:23 compute-0 podman[131930]: 2025-11-29 05:16:23.171222008 +0000 UTC m=+0.184781556 container attach dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:16:23 compute-0 sudo[132025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bemnunnyqmpxhbzibighzuavclayewiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393382.7537332-76-227733097142140/AnsiballZ_command.py'
Nov 29 05:16:23 compute-0 sudo[132025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:23 compute-0 python3.9[132027]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gvo9zkfa' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:16:23 compute-0 sudo[132025]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:24 compute-0 busy_dubinsky[131965]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:16:24 compute-0 busy_dubinsky[131965]: --> relative data size: 1.0
Nov 29 05:16:24 compute-0 busy_dubinsky[131965]: --> All data devices are unavailable
Nov 29 05:16:24 compute-0 sudo[132202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tonpbrtaqtdjgsidytuiiavacxojnxum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393383.7236583-84-150249779267319/AnsiballZ_file.py'
Nov 29 05:16:24 compute-0 sudo[132202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:24 compute-0 systemd[1]: libpod-dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8.scope: Deactivated successfully.
Nov 29 05:16:24 compute-0 podman[131930]: 2025-11-29 05:16:24.193062275 +0000 UTC m=+1.206621723 container died dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:16:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-106b60610875f2eaf19c3ce75668fcc06519aeb120f0b10d73d1162105c39523-merged.mount: Deactivated successfully.
Nov 29 05:16:24 compute-0 podman[131930]: 2025-11-29 05:16:24.245583952 +0000 UTC m=+1.259143390 container remove dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 05:16:24 compute-0 systemd[1]: libpod-conmon-dbd2116a95a689bc9b97a557c4ee6470825e1fde02a3d334d075ca31dba45fa8.scope: Deactivated successfully.
Nov 29 05:16:24 compute-0 sudo[131694]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:24 compute-0 sudo[132218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:16:24 compute-0 sudo[132218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:24 compute-0 sudo[132218]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:24 compute-0 python3.9[132205]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gvo9zkfa state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:16:24 compute-0 sudo[132202]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:24 compute-0 sudo[132243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:16:24 compute-0 sudo[132243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:24 compute-0 sudo[132243]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:24 compute-0 sudo[132286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:16:24 compute-0 sudo[132286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:24 compute-0 sudo[132286]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:24 compute-0 sudo[132317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:16:24 compute-0 sudo[132317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:24 compute-0 ceph-mon[75176]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:24 compute-0 sshd-session[130497]: Connection closed by 192.168.122.30 port 40322
Nov 29 05:16:24 compute-0 sshd-session[130494]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:16:24 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 05:16:24 compute-0 systemd[1]: session-40.scope: Consumed 5.798s CPU time.
Nov 29 05:16:24 compute-0 systemd-logind[793]: Session 40 logged out. Waiting for processes to exit.
Nov 29 05:16:24 compute-0 systemd-logind[793]: Removed session 40.
Nov 29 05:16:24 compute-0 podman[132382]: 2025-11-29 05:16:24.967826826 +0000 UTC m=+0.063923202 container create 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:16:25 compute-0 systemd[1]: Started libpod-conmon-7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa.scope.
Nov 29 05:16:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:16:25 compute-0 podman[132382]: 2025-11-29 05:16:24.941777002 +0000 UTC m=+0.037873408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:16:25 compute-0 podman[132382]: 2025-11-29 05:16:25.054185255 +0000 UTC m=+0.150281691 container init 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 05:16:25 compute-0 podman[132382]: 2025-11-29 05:16:25.066604692 +0000 UTC m=+0.162701028 container start 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 05:16:25 compute-0 podman[132382]: 2025-11-29 05:16:25.069221124 +0000 UTC m=+0.165317490 container attach 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:16:25 compute-0 amazing_elion[132398]: 167 167
Nov 29 05:16:25 compute-0 systemd[1]: libpod-7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa.scope: Deactivated successfully.
Nov 29 05:16:25 compute-0 podman[132382]: 2025-11-29 05:16:25.07446823 +0000 UTC m=+0.170564566 container died 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 05:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bbc31f96143717c4f6680d11eaebe786b9ae25459dded49e8e988c678f4c58e-merged.mount: Deactivated successfully.
Nov 29 05:16:25 compute-0 podman[132382]: 2025-11-29 05:16:25.11410795 +0000 UTC m=+0.210204276 container remove 7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:16:25 compute-0 systemd[1]: libpod-conmon-7cc1b5a893be3b265fb237725000849082b474c11997cf7d4d9db0760d60efaa.scope: Deactivated successfully.
Nov 29 05:16:25 compute-0 podman[132422]: 2025-11-29 05:16:25.334164939 +0000 UTC m=+0.053732709 container create fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 05:16:25 compute-0 systemd[1]: Started libpod-conmon-fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738.scope.
Nov 29 05:16:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:16:25 compute-0 podman[132422]: 2025-11-29 05:16:25.311462854 +0000 UTC m=+0.031030684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b116ee48aa01a4c3e097edd82fc3d965fa307a025f89ab335275eec8a8dc4a88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b116ee48aa01a4c3e097edd82fc3d965fa307a025f89ab335275eec8a8dc4a88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b116ee48aa01a4c3e097edd82fc3d965fa307a025f89ab335275eec8a8dc4a88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b116ee48aa01a4c3e097edd82fc3d965fa307a025f89ab335275eec8a8dc4a88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:25 compute-0 podman[132422]: 2025-11-29 05:16:25.421682404 +0000 UTC m=+0.141250204 container init fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:16:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:25 compute-0 podman[132422]: 2025-11-29 05:16:25.435050634 +0000 UTC m=+0.154618414 container start fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:16:25 compute-0 podman[132422]: 2025-11-29 05:16:25.438513047 +0000 UTC m=+0.158080877 container attach fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:16:26 compute-0 quirky_brattain[132439]: {
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:     "0": [
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:         {
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "devices": [
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "/dev/loop3"
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             ],
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_name": "ceph_lv0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_size": "21470642176",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "name": "ceph_lv0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "tags": {
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.cluster_name": "ceph",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.crush_device_class": "",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.encrypted": "0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.osd_id": "0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.type": "block",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.vdo": "0"
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             },
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "type": "block",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "vg_name": "ceph_vg0"
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:         }
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:     ],
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:     "1": [
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:         {
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "devices": [
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "/dev/loop4"
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             ],
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_name": "ceph_lv1",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_size": "21470642176",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "name": "ceph_lv1",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "tags": {
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.cluster_name": "ceph",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.crush_device_class": "",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.encrypted": "0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.osd_id": "1",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.type": "block",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.vdo": "0"
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             },
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "type": "block",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "vg_name": "ceph_vg1"
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:         }
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:     ],
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:     "2": [
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:         {
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "devices": [
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "/dev/loop5"
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             ],
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_name": "ceph_lv2",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_size": "21470642176",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "name": "ceph_lv2",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "tags": {
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.cluster_name": "ceph",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.crush_device_class": "",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.encrypted": "0",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.osd_id": "2",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.type": "block",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:                 "ceph.vdo": "0"
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             },
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "type": "block",
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:             "vg_name": "ceph_vg2"
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:         }
Nov 29 05:16:26 compute-0 quirky_brattain[132439]:     ]
Nov 29 05:16:26 compute-0 quirky_brattain[132439]: }
Nov 29 05:16:26 compute-0 systemd[1]: libpod-fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738.scope: Deactivated successfully.
Nov 29 05:16:26 compute-0 podman[132422]: 2025-11-29 05:16:26.218589565 +0000 UTC m=+0.938157415 container died fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:16:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b116ee48aa01a4c3e097edd82fc3d965fa307a025f89ab335275eec8a8dc4a88-merged.mount: Deactivated successfully.
Nov 29 05:16:26 compute-0 podman[132422]: 2025-11-29 05:16:26.296145203 +0000 UTC m=+1.015713023 container remove fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 05:16:26 compute-0 systemd[1]: libpod-conmon-fb25742b111ef11d5f1b2bc9cb98a203767a8f5f8325b9cabec0f99c72f60738.scope: Deactivated successfully.
Nov 29 05:16:26 compute-0 sudo[132317]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:26 compute-0 sudo[132462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:16:26 compute-0 sudo[132462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:26 compute-0 sudo[132462]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:26 compute-0 sudo[132487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:16:26 compute-0 sudo[132487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:26 compute-0 sudo[132487]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:26 compute-0 sudo[132512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:16:26 compute-0 sudo[132512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:26 compute-0 sudo[132512]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:26 compute-0 ceph-mon[75176]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:26 compute-0 sudo[132537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:16:26 compute-0 sudo[132537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:26 compute-0 podman[132602]: 2025-11-29 05:16:26.928659118 +0000 UTC m=+0.051910044 container create bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:16:26 compute-0 systemd[1]: Started libpod-conmon-bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82.scope.
Nov 29 05:16:26 compute-0 podman[132602]: 2025-11-29 05:16:26.900571665 +0000 UTC m=+0.023822681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:16:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:16:27 compute-0 podman[132602]: 2025-11-29 05:16:27.020064147 +0000 UTC m=+0.143315083 container init bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 05:16:27 compute-0 podman[132602]: 2025-11-29 05:16:27.031703155 +0000 UTC m=+0.154954061 container start bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:16:27 compute-0 podman[132602]: 2025-11-29 05:16:27.036837009 +0000 UTC m=+0.160087955 container attach bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:16:27 compute-0 boring_mclaren[132618]: 167 167
Nov 29 05:16:27 compute-0 systemd[1]: libpod-bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82.scope: Deactivated successfully.
Nov 29 05:16:27 compute-0 podman[132602]: 2025-11-29 05:16:27.039539023 +0000 UTC m=+0.162789949 container died bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0d4fff86a6cf6d1e21242825d2083c2c7dc1dcca8e8ebe28838b3765e4db952-merged.mount: Deactivated successfully.
Nov 29 05:16:27 compute-0 podman[132602]: 2025-11-29 05:16:27.084304925 +0000 UTC m=+0.207555851 container remove bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:16:27 compute-0 systemd[1]: libpod-conmon-bed3901c3104ff6f717df225d364bc9957817a81d4205ded4ff70798163c8e82.scope: Deactivated successfully.
Nov 29 05:16:27 compute-0 podman[132644]: 2025-11-29 05:16:27.318724868 +0000 UTC m=+0.061177766 container create b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:16:27 compute-0 systemd[1]: Started libpod-conmon-b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c.scope.
Nov 29 05:16:27 compute-0 podman[132644]: 2025-11-29 05:16:27.291242099 +0000 UTC m=+0.033695057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:16:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad13e39c417c5673e46ab0071ec6e0a8c5d03ad7f3d5f9d2165b07cb7fff4ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad13e39c417c5673e46ab0071ec6e0a8c5d03ad7f3d5f9d2165b07cb7fff4ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad13e39c417c5673e46ab0071ec6e0a8c5d03ad7f3d5f9d2165b07cb7fff4ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad13e39c417c5673e46ab0071ec6e0a8c5d03ad7f3d5f9d2165b07cb7fff4ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:16:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:27 compute-0 podman[132644]: 2025-11-29 05:16:27.437576034 +0000 UTC m=+0.180028962 container init b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:16:27 compute-0 podman[132644]: 2025-11-29 05:16:27.448784232 +0000 UTC m=+0.191237160 container start b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:16:27 compute-0 podman[132644]: 2025-11-29 05:16:27.453470714 +0000 UTC m=+0.195923642 container attach b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]: {
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "osd_id": 0,
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "type": "bluestore"
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:     },
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "osd_id": 1,
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "type": "bluestore"
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:     },
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "osd_id": 2,
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:         "type": "bluestore"
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]:     }
Nov 29 05:16:28 compute-0 adoring_torvalds[132661]: }
Nov 29 05:16:28 compute-0 systemd[1]: libpod-b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c.scope: Deactivated successfully.
Nov 29 05:16:28 compute-0 podman[132644]: 2025-11-29 05:16:28.491197952 +0000 UTC m=+1.233650880 container died b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:16:28 compute-0 systemd[1]: libpod-b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c.scope: Consumed 1.049s CPU time.
Nov 29 05:16:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ad13e39c417c5673e46ab0071ec6e0a8c5d03ad7f3d5f9d2165b07cb7fff4ff-merged.mount: Deactivated successfully.
Nov 29 05:16:28 compute-0 podman[132644]: 2025-11-29 05:16:28.554521119 +0000 UTC m=+1.296974027 container remove b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 05:16:28 compute-0 systemd[1]: libpod-conmon-b6567b4fa4eadf2e2601641742d41aac9a7adfbdf76d8d6675ae50426425cd1c.scope: Deactivated successfully.
Nov 29 05:16:28 compute-0 sudo[132537]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:28 compute-0 ceph-mon[75176]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:16:28 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:16:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:16:28 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:16:28 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 1e934f82-d5e1-45a1-868b-903e9770c140 does not exist
Nov 29 05:16:28 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 8f7576f4-edd1-4755-ad40-5c2085928517 does not exist
Nov 29 05:16:28 compute-0 sudo[132705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:16:28 compute-0 sudo[132705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:28 compute-0 sudo[132705]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:28 compute-0 sudo[132730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:16:28 compute-0 sudo[132730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:16:28 compute-0 sudo[132730]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:29 compute-0 sshd-session[71419]: Received disconnect from 38.102.83.113 port 57408:11: disconnected by user
Nov 29 05:16:29 compute-0 sshd-session[71419]: Disconnected from user zuul 38.102.83.113 port 57408
Nov 29 05:16:29 compute-0 sshd-session[71416]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:16:29 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 05:16:29 compute-0 systemd[1]: session-17.scope: Consumed 1min 25.967s CPU time.
Nov 29 05:16:29 compute-0 systemd-logind[793]: Session 17 logged out. Waiting for processes to exit.
Nov 29 05:16:29 compute-0 systemd-logind[793]: Removed session 17.
Nov 29 05:16:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:16:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:16:30 compute-0 sshd-session[132755]: Accepted publickey for zuul from 192.168.122.30 port 53668 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:16:30 compute-0 systemd-logind[793]: New session 41 of user zuul.
Nov 29 05:16:30 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 29 05:16:30 compute-0 sshd-session[132755]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:16:30 compute-0 ceph-mon[75176]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:31 compute-0 python3.9[132908]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:16:32 compute-0 ceph-mon[75176]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:32 compute-0 sudo[133062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aligwkpzaadpdwemvlkrtopnjmhajyvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393392.3340013-32-264859147361943/AnsiballZ_systemd.py'
Nov 29 05:16:32 compute-0 sudo[133062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:33 compute-0 python3.9[133064]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 05:16:33 compute-0 sudo[133062]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:33 compute-0 sudo[133216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peosteaurjkwijpgbakdxjcqhhajuybp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393393.5880497-40-130762992585412/AnsiballZ_systemd.py'
Nov 29 05:16:34 compute-0 sudo[133216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:34 compute-0 python3.9[133218]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:16:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:34 compute-0 sudo[133216]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:34 compute-0 ceph-mon[75176]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:35 compute-0 sudo[133369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zluznmfbxmtldlbhrzvayvqjxgjfjrjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393394.6233704-49-19055220327575/AnsiballZ_command.py'
Nov 29 05:16:35 compute-0 sudo[133369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:35 compute-0 python3.9[133371]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:16:35 compute-0 sudo[133369]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:36 compute-0 sudo[133522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnjhmekezubzvciborqcbxsdmlqoieap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393395.6132526-57-231830669442116/AnsiballZ_stat.py'
Nov 29 05:16:36 compute-0 sudo[133522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:36 compute-0 python3.9[133524]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:16:36 compute-0 sudo[133522]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:36 compute-0 ceph-mon[75176]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:37 compute-0 sudo[133674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lchkyehcxdruottizrihmakqugnnqnsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393396.7145898-66-97458687315355/AnsiballZ_file.py'
Nov 29 05:16:37 compute-0 sudo[133674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:37 compute-0 python3.9[133676]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:16:37 compute-0 sudo[133674]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:37 compute-0 sshd-session[132758]: Connection closed by 192.168.122.30 port 53668
Nov 29 05:16:37 compute-0 sshd-session[132755]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:16:37 compute-0 systemd-logind[793]: Session 41 logged out. Waiting for processes to exit.
Nov 29 05:16:37 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 05:16:37 compute-0 systemd[1]: session-41.scope: Consumed 4.589s CPU time.
Nov 29 05:16:37 compute-0 systemd-logind[793]: Removed session 41.
Nov 29 05:16:38 compute-0 ceph-mon[75176]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:40 compute-0 ceph-mon[75176]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:16:41
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.control', '.rgw.root', 'backups', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:16:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:42 compute-0 ceph-mon[75176]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:43 compute-0 sshd-session[133702]: Accepted publickey for zuul from 192.168.122.30 port 53872 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:16:43 compute-0 systemd-logind[793]: New session 42 of user zuul.
Nov 29 05:16:43 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 29 05:16:43 compute-0 sshd-session[133702]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:16:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:44 compute-0 python3.9[133855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:16:44 compute-0 ceph-mon[75176]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:45 compute-0 sudo[134009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmiedzrcmstepsfnipcfnayyghnniruo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393405.2240684-34-174972369480972/AnsiballZ_setup.py'
Nov 29 05:16:45 compute-0 sudo[134009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:45 compute-0 python3.9[134011]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:16:46 compute-0 sudo[134009]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:46 compute-0 sudo[134093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iklgryqsdjfztjtxvrplczeegriamazw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393405.2240684-34-174972369480972/AnsiballZ_dnf.py'
Nov 29 05:16:46 compute-0 sudo[134093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:16:46 compute-0 ceph-mon[75176]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:46 compute-0 python3.9[134095]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 05:16:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:48 compute-0 sudo[134093]: pam_unix(sudo:session): session closed for user root
Nov 29 05:16:48 compute-0 ceph-mon[75176]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:48 compute-0 python3.9[134246]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:16:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:50 compute-0 python3.9[134397]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 05:16:50 compute-0 ceph-mon[75176]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:16:51 compute-0 python3.9[134547]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:16:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:52 compute-0 python3.9[134698]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:16:52 compute-0 sshd-session[133705]: Connection closed by 192.168.122.30 port 53872
Nov 29 05:16:52 compute-0 sshd-session[133702]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:16:52 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 05:16:52 compute-0 systemd[1]: session-42.scope: Consumed 6.452s CPU time.
Nov 29 05:16:52 compute-0 systemd-logind[793]: Session 42 logged out. Waiting for processes to exit.
Nov 29 05:16:52 compute-0 systemd-logind[793]: Removed session 42.
Nov 29 05:16:52 compute-0 ceph-mon[75176]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:54 compute-0 ceph-mon[75176]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:56 compute-0 ceph-mon[75176]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:57 compute-0 sshd-session[134724]: Accepted publickey for zuul from 192.168.122.30 port 47334 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:16:57 compute-0 systemd-logind[793]: New session 43 of user zuul.
Nov 29 05:16:57 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 29 05:16:57 compute-0 sshd-session[134724]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:16:58 compute-0 ceph-mon[75176]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:16:59 compute-0 python3.9[134877]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:16:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:16:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:00 compute-0 sudo[135031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vndwfxfdrcmuinyoqmgwlowofwnymxye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393420.3135598-50-48295218328138/AnsiballZ_file.py'
Nov 29 05:17:00 compute-0 sudo[135031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:00 compute-0 ceph-mon[75176]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:00 compute-0 python3.9[135033]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:01 compute-0 sudo[135031]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:01 compute-0 sudo[135183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfybtziqszxzikhjbdmqtkwiesnplxyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393421.1774635-50-80150233153209/AnsiballZ_file.py'
Nov 29 05:17:01 compute-0 sudo[135183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:01 compute-0 python3.9[135185]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:01 compute-0 sudo[135183]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:02 compute-0 sudo[135335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghppashgouyybabvxrqqejsuzsiqlenh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393421.8743083-65-75474468439612/AnsiballZ_stat.py'
Nov 29 05:17:02 compute-0 sudo[135335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:02 compute-0 python3.9[135337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:02 compute-0 sudo[135335]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:02 compute-0 ceph-mon[75176]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:03 compute-0 sudo[135458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvkkpbelrmdydaaszsaijbgzvbigdzlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393421.8743083-65-75474468439612/AnsiballZ_copy.py'
Nov 29 05:17:03 compute-0 sudo[135458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:03 compute-0 python3.9[135460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393421.8743083-65-75474468439612/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d964e29446a15bf219d1f39a0bcf7adda320f9e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:03 compute-0 sudo[135458]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:03 compute-0 sudo[135610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bljolyivvamruetwqycozxfpzjxgxdij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393423.549431-65-180969507387554/AnsiballZ_stat.py'
Nov 29 05:17:03 compute-0 sudo[135610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:04 compute-0 python3.9[135612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:04 compute-0 sudo[135610]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:04 compute-0 sudo[135733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmxmxehhgwhifuppmqyafbgdufsobvpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393423.549431-65-180969507387554/AnsiballZ_copy.py'
Nov 29 05:17:04 compute-0 sudo[135733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:04 compute-0 python3.9[135735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393423.549431-65-180969507387554/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=077366a36a0310a88727ebecf6959a48ad4186c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:04 compute-0 sudo[135733]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:04 compute-0 ceph-mon[75176]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:05 compute-0 sudo[135885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlklfvprfdbrfowoqctkdfptutsszdet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393424.9073458-65-120006318862557/AnsiballZ_stat.py'
Nov 29 05:17:05 compute-0 sudo[135885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:05 compute-0 python3.9[135887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:05 compute-0 sudo[135885]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:05 compute-0 sudo[136008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zikavoouvqdmjntaaxkwvvaqrcuqbvkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393424.9073458-65-120006318862557/AnsiballZ_copy.py'
Nov 29 05:17:05 compute-0 sudo[136008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:06 compute-0 python3.9[136010]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393424.9073458-65-120006318862557/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3c2b03cad198e356b4c3ecd33d00b02843b0c2f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:06 compute-0 sudo[136008]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:06 compute-0 sudo[136160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbsrdgbiurigycscfqkxfifczxwcijva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393426.3515632-109-254223557347473/AnsiballZ_file.py'
Nov 29 05:17:06 compute-0 sudo[136160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:06 compute-0 ceph-mon[75176]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:06 compute-0 python3.9[136162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:06 compute-0 sudo[136160]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:07 compute-0 sudo[136312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chwgahazoilpadzshclltzjjueqyuvhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393427.106153-109-30941117734253/AnsiballZ_file.py'
Nov 29 05:17:07 compute-0 sudo[136312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:07 compute-0 python3.9[136314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:07 compute-0 sudo[136312]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:08 compute-0 sudo[136464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-latuxsuzvpdaiogrkectwnuydxhmnkts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393427.972592-124-52884175136741/AnsiballZ_stat.py'
Nov 29 05:17:08 compute-0 sudo[136464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:08 compute-0 python3.9[136466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:08 compute-0 sudo[136464]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:08 compute-0 ceph-mon[75176]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:09 compute-0 sudo[136587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgrmojzurpftorvvfwnaofkfxmmmjiyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393427.972592-124-52884175136741/AnsiballZ_copy.py'
Nov 29 05:17:09 compute-0 sudo[136587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:09 compute-0 python3.9[136589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393427.972592-124-52884175136741/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0d6d35d117547aaf5ddee29a6d0a529d82aeb93b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:09 compute-0 sudo[136587]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:09 compute-0 sudo[136739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-welalxzasnivrmzfkndqmnnmnztmagnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393429.5179088-124-240443419867000/AnsiballZ_stat.py'
Nov 29 05:17:09 compute-0 sudo[136739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:10 compute-0 python3.9[136741]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:10 compute-0 sudo[136739]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:10 compute-0 sudo[136862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfkratsyacpwusbzgjduhgjgxxafofhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393429.5179088-124-240443419867000/AnsiballZ_copy.py'
Nov 29 05:17:10 compute-0 sudo[136862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:10 compute-0 python3.9[136864]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393429.5179088-124-240443419867000/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=eca369a90e5944c1c3ae7c2351662e846dddb3e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:10 compute-0 sudo[136862]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:10 compute-0 ceph-mon[75176]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:11 compute-0 sudo[137014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zghorgdsdzajrjzowomrpnctgrlchzfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393430.7847896-124-258844760852483/AnsiballZ_stat.py'
Nov 29 05:17:11 compute-0 sudo[137014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:11 compute-0 python3.9[137016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:11 compute-0 sudo[137014]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:17:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:17:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:17:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:17:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:17:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:17:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:11 compute-0 sudo[137137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tknthbcqduhstzadydzqxrgwgkxlzmpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393430.7847896-124-258844760852483/AnsiballZ_copy.py'
Nov 29 05:17:11 compute-0 sudo[137137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:11 compute-0 python3.9[137139]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393430.7847896-124-258844760852483/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0deb4dacf3bc7bb1197ae21aac4c45bcb95c3d1e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:12 compute-0 sudo[137137]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:12 compute-0 sudo[137289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bygwqjhmicmhhxefgzotscmhxtbscodn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393432.2502105-168-86161851534246/AnsiballZ_file.py'
Nov 29 05:17:12 compute-0 sudo[137289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:12 compute-0 ceph-mon[75176]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:12 compute-0 python3.9[137291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:12 compute-0 sudo[137289]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:13 compute-0 sudo[137441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbhrfuvgyjywnphoqmbczulrrsmowkxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393433.0916805-168-170031338201664/AnsiballZ_file.py'
Nov 29 05:17:13 compute-0 sudo[137441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:13 compute-0 python3.9[137443]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:13 compute-0 sudo[137441]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:14 compute-0 sudo[137593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjvausrmxomyzdzzanlajaugkazncrns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393433.9195518-183-1233429827651/AnsiballZ_stat.py'
Nov 29 05:17:14 compute-0 sudo[137593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:14 compute-0 python3.9[137595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:14 compute-0 sudo[137593]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:14 compute-0 sudo[137716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbjxbrmugyihszcmyfwanofutvxkufqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393433.9195518-183-1233429827651/AnsiballZ_copy.py'
Nov 29 05:17:14 compute-0 sudo[137716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:14 compute-0 ceph-mon[75176]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:15 compute-0 python3.9[137718]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393433.9195518-183-1233429827651/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d6414211c7944ea45bbfc0b627e51d384577f8d3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:15 compute-0 sudo[137716]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:15 compute-0 sudo[137869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zozcqaicdjwbllloavlrwmvyinwnxzct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393435.2714267-183-244045972723734/AnsiballZ_stat.py'
Nov 29 05:17:15 compute-0 sudo[137869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:15 compute-0 python3.9[137871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:15 compute-0 sudo[137869]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:16 compute-0 sudo[137992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fejxxgmimmsnkosylrkybvpbqnvtrsdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393435.2714267-183-244045972723734/AnsiballZ_copy.py'
Nov 29 05:17:16 compute-0 sudo[137992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:16 compute-0 python3.9[137994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393435.2714267-183-244045972723734/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=eca369a90e5944c1c3ae7c2351662e846dddb3e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:16 compute-0 sudo[137992]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:16 compute-0 ceph-mon[75176]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:17 compute-0 sudo[138145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkgwhohpieexehwwkmkjtyxraiybllya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393436.7076762-183-180650248438107/AnsiballZ_stat.py'
Nov 29 05:17:17 compute-0 sudo[138145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:17 compute-0 python3.9[138147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:17 compute-0 sudo[138145]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:17 compute-0 sshd-session[137719]: Received disconnect from 120.48.175.69 port 43782:11: Bye Bye [preauth]
Nov 29 05:17:17 compute-0 sshd-session[137719]: Disconnected from authenticating user root 120.48.175.69 port 43782 [preauth]
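[editor's note] The two sshd-session lines above record a pre-auth disconnect for user root from 120.48.175.69, an address with no relation to the 192.168.122.0/24 deployment network, which is the usual signature of opportunistic SSH scanning rather than anything the job did. A minimal sketch for tallying such probes per source address from a saved journal dump (the filename is hypothetical):

    import re
    from collections import Counter

    PROBE = re.compile(
        r"Disconnected from authenticating user \S+ "
        r"(\d{1,3}(?:\.\d{1,3}){3}) port \d+ \[preauth\]")

    hits = Counter()
    with open("compute-0-journal.txt") as fh:  # hypothetical dump file
        for line in fh:
            m = PROBE.search(line)
            if m:
                hits[m.group(1)] += 1

    for ip, count in hits.most_common(10):
        print(f"{count:6d}  {ip}")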
Nov 29 05:17:17 compute-0 sudo[138268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfxrdmmagzsqmgshpbblougfpciophcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393436.7076762-183-180650248438107/AnsiballZ_copy.py'
Nov 29 05:17:17 compute-0 sudo[138268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:18 compute-0 python3.9[138270]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393436.7076762-183-180650248438107/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=81b2a509a9ed127899e3697de7de1afb4726a4d9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:18 compute-0 sudo[138268]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:18 compute-0 ceph-mon[75176]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:19 compute-0 sudo[138420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyxuiacubefgxqkglujqxkiqttxqrtnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393438.9101534-243-211338738332497/AnsiballZ_file.py'
Nov 29 05:17:19 compute-0 sudo[138420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:19 compute-0 python3.9[138422]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:19 compute-0 sudo[138420]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:20 compute-0 sudo[138572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyemhzfzlbouyzlfqwpfsglhjwupyoft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393439.690746-251-52716413697474/AnsiballZ_stat.py'
Nov 29 05:17:20 compute-0 sudo[138572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:20 compute-0 python3.9[138574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:20 compute-0 sudo[138572]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:20 compute-0 sudo[138695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scobzfgfxacnmcxsgsbsmiygemonnycq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393439.690746-251-52716413697474/AnsiballZ_copy.py'
Nov 29 05:17:20 compute-0 sudo[138695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:20 compute-0 ceph-mon[75176]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:20 compute-0 python3.9[138697]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393439.690746-251-52716413697474/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:20 compute-0 sudo[138695]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:21 compute-0 sudo[138847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bogufeotbspvhafqjqkhwgghaefrwbcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393441.1966338-267-228647343471957/AnsiballZ_file.py'
Nov 29 05:17:21 compute-0 sudo[138847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:21 compute-0 python3.9[138849]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:21 compute-0 sudo[138847]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:22 compute-0 sudo[138999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfevhgohhctogtyzhdacasbjtdigmjwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393441.9956808-275-179464019831335/AnsiballZ_stat.py'
Nov 29 05:17:22 compute-0 sudo[138999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:22 compute-0 python3.9[139001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:22 compute-0 sudo[138999]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:22 compute-0 ceph-mon[75176]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:22 compute-0 sudo[139122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwkuulzqfvxvaprpvxuydtespwrrijhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393441.9956808-275-179464019831335/AnsiballZ_copy.py'
Nov 29 05:17:22 compute-0 sudo[139122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:23 compute-0 python3.9[139124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393441.9956808-275-179464019831335/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:23 compute-0 sudo[139122]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:23 compute-0 sudo[139274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btvxmjvhywiqvmlujbbuqwlgaxwgktmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393443.469208-291-129318522731091/AnsiballZ_file.py'
Nov 29 05:17:23 compute-0 sudo[139274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:24 compute-0 python3.9[139276]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:24 compute-0 sudo[139274]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:24 compute-0 sudo[139426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swngaqcfkvigvblpopzbdkhyalsbtxjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393444.3369088-299-270532558142390/AnsiballZ_stat.py'
Nov 29 05:17:24 compute-0 sudo[139426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:24 compute-0 python3.9[139428]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:24 compute-0 sudo[139426]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:24 compute-0 ceph-mon[75176]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:25 compute-0 sudo[139549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyorwvldkxxbmfbuhinxfjfykrftitjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393444.3369088-299-270532558142390/AnsiballZ_copy.py'
Nov 29 05:17:25 compute-0 sudo[139549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:25 compute-0 python3.9[139551]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393444.3369088-299-270532558142390/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:25 compute-0 sudo[139549]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:26 compute-0 sudo[139701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfrkowzqvmyfmhxokoaqafqqcolidfev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393445.734829-315-68744109681208/AnsiballZ_file.py'
Nov 29 05:17:26 compute-0 sudo[139701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:26 compute-0 python3.9[139703]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:26 compute-0 sudo[139701]: pam_unix(sudo:session): session closed for user root
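[editor's note] The recurring ansible.builtin.file tasks create each certificate and CA-bundle directory as root:root 0755 with the SELinux type container_file_t, the label that lets the service containers bind-mount and read the contents. A hedged Python equivalent, with chcon standing in for Ansible's SELinux bindings (path copied from the task above; must run as root, like the sudo'd module):

    import os
    import subprocess

    path = "/var/lib/openstack/cacerts/bootstrap"
    os.makedirs(path, exist_ok=True)
    os.chmod(path, 0o755)  # mode=0755, as in the task
    # setype=container_file_t so containers are allowed to read the directory
    subprocess.run(["chcon", "-t", "container_file_t", path], check=True)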
Nov 29 05:17:26 compute-0 sudo[139853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfjckhhhuhkjwxbndvqfaxyaetrjmjnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393446.4727738-323-8113773757378/AnsiballZ_stat.py'
Nov 29 05:17:26 compute-0 sudo[139853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:26 compute-0 ceph-mon[75176]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:26 compute-0 python3.9[139855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:26 compute-0 sudo[139853]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:27 compute-0 sudo[139976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atkixpqxaecfcjhmfuzwoadxflmvxtdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393446.4727738-323-8113773757378/AnsiballZ_copy.py'
Nov 29 05:17:27 compute-0 sudo[139976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:27 compute-0 python3.9[139978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393446.4727738-323-8113773757378/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:27 compute-0 sudo[139976]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:28 compute-0 sudo[140128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqwrodcukhhsrqazcpuuapptubiqqdrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393447.9548266-339-263878382495342/AnsiballZ_file.py'
Nov 29 05:17:28 compute-0 sudo[140128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:28 compute-0 python3.9[140130]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:28 compute-0 sudo[140128]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:28 compute-0 sudo[140199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:17:28 compute-0 sudo[140199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:28 compute-0 sudo[140199]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:28 compute-0 ceph-mon[75176]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
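[editor's note] ceph-mon and ceph-mgr emit the same pgmap heartbeat every couple of seconds, and throughout this window it shows a healthy, essentially idle cluster: 305 PGs, all active+clean, usage flat at 148 MiB of 60 GiB. A small parser fitted to that exact line shape makes the baseline easy to assert when post-processing a job log (regex and field names are mine):

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    def parse_pgmap(line):
        m = PGMAP.search(line)
        return m.groupdict() if m else None

    sample = ("pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, "
              "148 MiB used, 60 GiB / 60 GiB avail")
    assert parse_pgmap(sample)["states"] == "305 active+clean"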
Nov 29 05:17:28 compute-0 sudo[140255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:17:28 compute-0 sudo[140255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:28 compute-0 sudo[140255]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:28 compute-0 sudo[140288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:17:28 compute-0 sudo[140288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:28 compute-0 sudo[140288]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:29 compute-0 sudo[140362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qozmqhxftxhylnzocauavvgrvnrmuqos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393448.745382-347-20400533105468/AnsiballZ_stat.py'
Nov 29 05:17:29 compute-0 sudo[140362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:29 compute-0 sudo[140349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:17:29 compute-0 sudo[140349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:29 compute-0 python3.9[140380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:29 compute-0 sudo[140362]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:29 compute-0 sudo[140349]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:17:29 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:17:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:17:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:17:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:17:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:17:29 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 097ac8b0-e33e-489d-8d9b-551fb465aa05 does not exist
Nov 29 05:17:29 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a2c247fc-09fa-47c6-8c2b-8072c2bd1db1 does not exist
Nov 29 05:17:29 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ec36492f-78a7-4608-94e7-a405138dfc61 does not exist
Nov 29 05:17:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:17:29 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:17:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:17:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:17:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:17:29 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
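[editor's note] The burst of mon_command dispatches above (config generate-minimal-conf, auth get on client.admin and client.bootstrap-osd, osd tree filtered to destroyed) is the cephadm mgr module reconciling this host: regenerating the minimal ceph.conf and keyrings it distributes, and checking for destroyed OSDs before creating new ones. The same audit trail can be replayed on demand; a sketch, assuming the standard "ceph log last <num> <level> <channel>" CLI form:

    import subprocess

    # Assumed CLI shape: ceph log last <num> <level> <channel>
    print(subprocess.run(
        ["ceph", "log", "last", "20", "info", "audit"],
        check=True, capture_output=True, text=True).stdout)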
Nov 29 05:17:29 compute-0 sudo[140493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:17:29 compute-0 sudo[140493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:29 compute-0 sudo[140493]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:29 compute-0 sudo[140573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zehkqpgmrixaovqlhmesrxnjtifvvwiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393448.745382-347-20400533105468/AnsiballZ_copy.py'
Nov 29 05:17:29 compute-0 sudo[140573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:29 compute-0 sudo[140545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:17:29 compute-0 sudo[140545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:29 compute-0 sudo[140545]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:29 compute-0 sudo[140588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:17:29 compute-0 sudo[140588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:29 compute-0 sudo[140588]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:29 compute-0 sudo[140613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:17:29 compute-0 sudo[140613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:29 compute-0 python3.9[140585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393448.745382-347-20400533105468/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:29 compute-0 sudo[140573]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:17:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:17:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:17:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:17:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:17:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:17:30 compute-0 podman[140728]: 2025-11-29 05:17:30.159409441 +0000 UTC m=+0.057386720 container create 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:17:30 compute-0 systemd[1]: Started libpod-conmon-14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2.scope.
Nov 29 05:17:30 compute-0 podman[140728]: 2025-11-29 05:17:30.131543517 +0000 UTC m=+0.029520866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:17:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:17:30 compute-0 podman[140728]: 2025-11-29 05:17:30.247297948 +0000 UTC m=+0.145275227 container init 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:17:30 compute-0 podman[140728]: 2025-11-29 05:17:30.255426337 +0000 UTC m=+0.153403596 container start 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 05:17:30 compute-0 podman[140728]: 2025-11-29 05:17:30.258707098 +0000 UTC m=+0.156684357 container attach 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:17:30 compute-0 angry_hawking[140789]: 167 167
Nov 29 05:17:30 compute-0 systemd[1]: libpod-14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2.scope: Deactivated successfully.
Nov 29 05:17:30 compute-0 conmon[140789]: conmon 14fa43378914b27f08e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2.scope/container/memory.events
Nov 29 05:17:30 compute-0 podman[140728]: 2025-11-29 05:17:30.262924562 +0000 UTC m=+0.160901861 container died 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:17:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e6c8dcafc9e38f7d6579a6f3933d89eff61be7810bd8db79900c460d90ae2bf-merged.mount: Deactivated successfully.
Nov 29 05:17:30 compute-0 podman[140728]: 2025-11-29 05:17:30.303954318 +0000 UTC m=+0.201931587 container remove 14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hawking, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:17:30 compute-0 systemd[1]: libpod-conmon-14fa43378914b27f08e2cfdace3e83fcb8e2388aedcdcb1bf49bff16351473a2.scope: Deactivated successfully.
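[editor's note] The angry_hawking container is created, prints "167 167", and is torn down within a tenth of a second. That pattern matches cephadm probing the uid/gid of the ceph user inside the image before it deploys or execs anything (167:167 is the ceph user and group in the official images); this reading is an inference, not stated in the log. An assumed-equivalent one-off check, reusing the image digest from the log:

    import subprocess

    IMG = ("quay.io/ceph/ceph@sha256:"
           "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Prints the numeric owner of /var/lib/ceph inside the image, e.g. "167 167"
    print(subprocess.run(
        ["podman", "run", "--rm", IMG, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout.strip())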
Nov 29 05:17:30 compute-0 sudo[140857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-workzjzgsbzvvbilrmouwpowkzovbbfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393450.0564315-363-264596421572670/AnsiballZ_file.py'
Nov 29 05:17:30 compute-0 sudo[140857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:30 compute-0 podman[140865]: 2025-11-29 05:17:30.458322227 +0000 UTC m=+0.042392152 container create c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:17:30 compute-0 systemd[1]: Started libpod-conmon-c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d.scope.
Nov 29 05:17:30 compute-0 podman[140865]: 2025-11-29 05:17:30.436986613 +0000 UTC m=+0.021056538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:17:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:30 compute-0 podman[140865]: 2025-11-29 05:17:30.582652448 +0000 UTC m=+0.166722463 container init c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:17:30 compute-0 podman[140865]: 2025-11-29 05:17:30.593949255 +0000 UTC m=+0.178019150 container start c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 05:17:30 compute-0 python3.9[140859]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:30 compute-0 podman[140865]: 2025-11-29 05:17:30.597758669 +0000 UTC m=+0.181828594 container attach c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:17:30 compute-0 sudo[140857]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:30 compute-0 ceph-mon[75176]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:31 compute-0 sudo[141038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzbzroebqvfohmezgrdmuascuvuxdlpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393450.841335-371-43642366881700/AnsiballZ_stat.py'
Nov 29 05:17:31 compute-0 sudo[141038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:31 compute-0 python3.9[141042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:31 compute-0 sudo[141038]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:31 compute-0 affectionate_lehmann[140883]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:17:31 compute-0 affectionate_lehmann[140883]: --> relative data size: 1.0
Nov 29 05:17:31 compute-0 affectionate_lehmann[140883]: --> All data devices are unavailable
Nov 29 05:17:31 compute-0 systemd[1]: libpod-c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d.scope: Deactivated successfully.
Nov 29 05:17:31 compute-0 podman[140865]: 2025-11-29 05:17:31.71135604 +0000 UTC m=+1.295425935 container died c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 05:17:31 compute-0 systemd[1]: libpod-c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d.scope: Consumed 1.059s CPU time.
Nov 29 05:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6570cbd5b169a250ef3f69afa9dbefe00a1156f354e3a078841549a3b81a113a-merged.mount: Deactivated successfully.
Nov 29 05:17:31 compute-0 podman[140865]: 2025-11-29 05:17:31.794055889 +0000 UTC m=+1.378125784 container remove c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:17:31 compute-0 systemd[1]: libpod-conmon-c103609a1821440bfe64703b482af5787c0baaf59e61098b9b297a4f8ee2413d.scope: Deactivated successfully.
Nov 29 05:17:31 compute-0 sudo[140613]: pam_unix(sudo:session): session closed for user root
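[editor's note] The lvm batch run above ends with "--> All data devices are unavailable" and the session closes without creating anything: the three LVs (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) already carry OSDs, so the batch is a no-op, and cephadm follows up with "lvm list" (the sudo line just below) to reconcile what is actually deployed. A hedged sketch of running the same inspection; the command mirrors the invocation in the log, while the JSON layout (keyed by OSD id, devices carrying lv_path and ceph.osd_fsid tags) is assumed from ceph-volume's usual output:

    import json
    import subprocess

    cmd = ["cephadm", "ceph-volume",
           "--fsid", "93f82912-647c-5e78-b081-707d0a2966d8",
           "--", "lvm", "list", "--format", "json"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            tags = dev.get("tags", {})
            print(osd_id, dev.get("lv_path"), tags.get("ceph.osd_fsid"))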
Nov 29 05:17:31 compute-0 sudo[141154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:17:31 compute-0 sudo[141154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:31 compute-0 sudo[141154]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:31 compute-0 sudo[141222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxowyfjzcoluqfdqvzjekhhdigygphrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393450.841335-371-43642366881700/AnsiballZ_copy.py'
Nov 29 05:17:31 compute-0 sudo[141222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:31 compute-0 sudo[141221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:17:32 compute-0 sudo[141221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:32 compute-0 sudo[141221]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:32 compute-0 sudo[141249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:17:32 compute-0 sudo[141249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:32 compute-0 sudo[141249]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:32 compute-0 sudo[141274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:17:32 compute-0 sudo[141274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:32 compute-0 python3.9[141231]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393450.841335-371-43642366881700/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=bbd42c7a2d8dc3ccd2c5e77e3911bd7d9d2d1dde backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:32 compute-0 sudo[141222]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:32 compute-0 podman[141363]: 2025-11-29 05:17:32.541426803 +0000 UTC m=+0.061931591 container create e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:17:32 compute-0 systemd[1]: Started libpod-conmon-e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e.scope.
Nov 29 05:17:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:17:32 compute-0 podman[141363]: 2025-11-29 05:17:32.516317916 +0000 UTC m=+0.036822744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:17:32 compute-0 sshd-session[134727]: Connection closed by 192.168.122.30 port 47334
Nov 29 05:17:32 compute-0 sshd-session[134724]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:17:32 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 05:17:32 compute-0 systemd[1]: session-43.scope: Consumed 25.979s CPU time.
Nov 29 05:17:32 compute-0 systemd-logind[793]: Session 43 logged out. Waiting for processes to exit.
Nov 29 05:17:32 compute-0 systemd-logind[793]: Removed session 43.
Nov 29 05:17:32 compute-0 podman[141363]: 2025-11-29 05:17:32.623433446 +0000 UTC m=+0.143938234 container init e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:17:32 compute-0 podman[141363]: 2025-11-29 05:17:32.634389154 +0000 UTC m=+0.154893922 container start e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:17:32 compute-0 podman[141363]: 2025-11-29 05:17:32.63786681 +0000 UTC m=+0.158371578 container attach e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:17:32 compute-0 intelligent_black[141379]: 167 167
Nov 29 05:17:32 compute-0 systemd[1]: libpod-e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e.scope: Deactivated successfully.
Nov 29 05:17:32 compute-0 podman[141363]: 2025-11-29 05:17:32.640787441 +0000 UTC m=+0.161292209 container died e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e498a32eaa1e46640c522d96675f346660d511cab7fd69e0883ca80adcd1731-merged.mount: Deactivated successfully.
Nov 29 05:17:32 compute-0 podman[141363]: 2025-11-29 05:17:32.676518699 +0000 UTC m=+0.197023467 container remove e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 05:17:32 compute-0 systemd[1]: libpod-conmon-e6c81ab88f855559827776fadf224a9019801eeed0f03c328071c184e7a26e2e.scope: Deactivated successfully.
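
The short-lived container above (init → start → attach → died → remove within a fraction of a second) looks like one of cephadm's host probes, and its only output, "167 167", matches the uid/gid of the ceph user baked into the image. A minimal sketch of the equivalent manual probe, assuming the image digest from the log is still pullable and that /var/lib/ceph is the path being stat'ed (it is one of the paths cephadm checks when extracting the uid/gid):

    # uid/gid probe comparable to what these throwaway containers appear to run
    podman run --rm quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        stat -c '%u %g' /var/lib/ceph
    # expected output: 167 167
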
Nov 29 05:17:32 compute-0 podman[141404]: 2025-11-29 05:17:32.830766214 +0000 UTC m=+0.044914303 container create 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:17:32 compute-0 systemd[1]: Started libpod-conmon-028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85.scope.
Nov 29 05:17:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d75660aa626ca017bbf2f7da9e84d1741dcd729f0294d0c95c80129afb29316/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d75660aa626ca017bbf2f7da9e84d1741dcd729f0294d0c95c80129afb29316/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d75660aa626ca017bbf2f7da9e84d1741dcd729f0294d0c95c80129afb29316/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d75660aa626ca017bbf2f7da9e84d1741dcd729f0294d0c95c80129afb29316/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
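
These kernel lines recur for every container bind mount and are informational, not errors: the backing XFS filesystem was created without the bigtime feature, so its inode timestamps are 32-bit and roll over in 2038. A quick check, assuming xfsprogs new enough to report the flag; run it against whichever mount point backs /var/lib/containers (often / on nodes like this):

    # 'bigtime=1' means 64-bit timestamps; 'bigtime=0' matches the warnings above
    xfs_info / | grep -o 'bigtime=[01]'
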
Nov 29 05:17:32 compute-0 podman[141404]: 2025-11-29 05:17:32.905779225 +0000 UTC m=+0.119927354 container init 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:17:32 compute-0 podman[141404]: 2025-11-29 05:17:32.81101927 +0000 UTC m=+0.025167389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:17:32 compute-0 podman[141404]: 2025-11-29 05:17:32.914371065 +0000 UTC m=+0.128519164 container start 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:17:32 compute-0 podman[141404]: 2025-11-29 05:17:32.91781811 +0000 UTC m=+0.131966219 container attach 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:17:32 compute-0 ceph-mon[75176]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:33 compute-0 angry_blackburn[141420]: {
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:     "0": [
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:         {
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "devices": [
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "/dev/loop3"
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             ],
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_name": "ceph_lv0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_size": "21470642176",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "name": "ceph_lv0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "tags": {
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.cluster_name": "ceph",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.crush_device_class": "",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.encrypted": "0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.osd_id": "0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.type": "block",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.vdo": "0"
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             },
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "type": "block",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "vg_name": "ceph_vg0"
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:         }
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:     ],
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:     "1": [
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:         {
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "devices": [
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "/dev/loop4"
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             ],
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_name": "ceph_lv1",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_size": "21470642176",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "name": "ceph_lv1",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "tags": {
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.cluster_name": "ceph",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.crush_device_class": "",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.encrypted": "0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.osd_id": "1",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.type": "block",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.vdo": "0"
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             },
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "type": "block",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "vg_name": "ceph_vg1"
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:         }
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:     ],
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:     "2": [
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:         {
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "devices": [
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "/dev/loop5"
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             ],
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_name": "ceph_lv2",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_size": "21470642176",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "name": "ceph_lv2",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "tags": {
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.cluster_name": "ceph",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.crush_device_class": "",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.encrypted": "0",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.osd_id": "2",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.type": "block",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:                 "ceph.vdo": "0"
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             },
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "type": "block",
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:             "vg_name": "ceph_vg2"
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:         }
Nov 29 05:17:33 compute-0 angry_blackburn[141420]:     ]
Nov 29 05:17:33 compute-0 angry_blackburn[141420]: }
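
The JSON block printed by angry_blackburn appears to be a ceph-volume inventory of the LVM-backed OSDs on this host: OSD ids 0-2 map to ceph_vg{0,1,2}/ceph_lv{0,1,2}, each roughly 20 GiB (lv_size 21470642176 bytes) and carved from loop devices /dev/loop3-5, all tagged with the cluster fsid 93f82912-647c-5e78-b081-707d0a2966d8. A sketch of the equivalent direct call, assuming the same fsid; cephadm wraps it in a throwaway container exactly as logged here:

    sudo cephadm ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
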
Nov 29 05:17:33 compute-0 systemd[1]: libpod-028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85.scope: Deactivated successfully.
Nov 29 05:17:33 compute-0 podman[141404]: 2025-11-29 05:17:33.622000493 +0000 UTC m=+0.836148612 container died 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:17:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d75660aa626ca017bbf2f7da9e84d1741dcd729f0294d0c95c80129afb29316-merged.mount: Deactivated successfully.
Nov 29 05:17:33 compute-0 podman[141404]: 2025-11-29 05:17:33.691305164 +0000 UTC m=+0.905453283 container remove 028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_blackburn, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:17:33 compute-0 systemd[1]: libpod-conmon-028c4840bb1c01df69289aee5bd35f198f4ee0b18f7e721ce071c45d62ecae85.scope: Deactivated successfully.
Nov 29 05:17:33 compute-0 sudo[141274]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:33 compute-0 sudo[141442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:17:33 compute-0 sudo[141442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:33 compute-0 sudo[141442]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:33 compute-0 sudo[141467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:17:33 compute-0 sudo[141467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:33 compute-0 sudo[141467]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:33 compute-0 sudo[141492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:17:33 compute-0 sudo[141492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:33 compute-0 sudo[141492]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:33 compute-0 sudo[141517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:17:34 compute-0 sudo[141517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
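
The recurring _set_new_cache_sizes line is the monitor's periodic memory autotuning: roughly 1 GB of cache is apportioned between incremental osdmaps, full osdmaps, and the rocksdb block cache, which is what the inc_alloc/full_alloc/kv_alloc figures show (they sum to approximately cache_size). The tuning target can be inspected at runtime; the default mon_memory_target is 2 GiB:

    ceph config get mon mon_memory_target
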
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.336857) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454336927, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1680, "num_deletes": 252, "total_data_size": 2421058, "memory_usage": 2459112, "flush_reason": "Manual Compaction"}
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454350497, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1412791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7330, "largest_seqno": 9009, "table_properties": {"data_size": 1407233, "index_size": 2506, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16028, "raw_average_key_size": 20, "raw_value_size": 1394152, "raw_average_value_size": 1803, "num_data_blocks": 118, "num_entries": 773, "num_filter_entries": 773, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764393294, "oldest_key_time": 1764393294, "file_creation_time": 1764393454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 13689 microseconds, and 5372 cpu microseconds.
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.350552) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1412791 bytes OK
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.350574) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.352509) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.352531) EVENT_LOG_v1 {"time_micros": 1764393454352523, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.352551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2413582, prev total WAL file size 2413582, number of live WAL files 2.
Nov 29 05:17:34 compute-0 podman[141584]: 2025-11-29 05:17:34.352802519 +0000 UTC m=+0.047800964 container create 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.353856) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1379KB)], [20(7305KB)]
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454353918, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8893809, "oldest_snapshot_seqno": -1}
Nov 29 05:17:34 compute-0 systemd[1]: Started libpod-conmon-4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3.scope.
Nov 29 05:17:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3384 keys, 6952343 bytes, temperature: kUnknown
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454420656, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6952343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6926422, "index_size": 16340, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 80945, "raw_average_key_size": 23, "raw_value_size": 6861983, "raw_average_value_size": 2027, "num_data_blocks": 725, "num_entries": 3384, "num_filter_entries": 3384, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764393454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:17:34 compute-0 podman[141584]: 2025-11-29 05:17:34.327249282 +0000 UTC m=+0.022247797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.420891) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6952343 bytes
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.422170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.1 rd, 104.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.1 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 3826, records dropped: 442 output_compression: NoCompression
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.422194) EVENT_LOG_v1 {"time_micros": 1764393454422183, "job": 6, "event": "compaction_finished", "compaction_time_micros": 66796, "compaction_time_cpu_micros": 29807, "output_level": 6, "num_output_files": 1, "total_output_size": 6952343, "num_input_records": 3826, "num_output_records": 3384, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454422672, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393454424463, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.353750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.424509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.424514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.424516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.424518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:17:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:17:34.424519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
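
The rocksdb lines above record one round of routine store maintenance on the monitor: memtable #22 is flushed to a 1.4 MB L0 table, then job 6 manually compacts it together with the existing 7.3 MB L6 file into a single 6.9 MB L6 table, dropping 442 of 3826 records, after which both input tables and the old WAL segment are deleted. Monitors compact trimmed ranges on their own (see mon_compact_on_trim); the same compaction can also be requested by hand:

    # on-demand compaction of this monitor's rocksdb store
    ceph tell mon.compute-0 compact
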
Nov 29 05:17:34 compute-0 podman[141584]: 2025-11-29 05:17:34.436168705 +0000 UTC m=+0.131167240 container init 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 05:17:34 compute-0 podman[141584]: 2025-11-29 05:17:34.44366831 +0000 UTC m=+0.138666785 container start 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:17:34 compute-0 distracted_bose[141600]: 167 167
Nov 29 05:17:34 compute-0 podman[141584]: 2025-11-29 05:17:34.447306708 +0000 UTC m=+0.142305183 container attach 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:17:34 compute-0 systemd[1]: libpod-4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3.scope: Deactivated successfully.
Nov 29 05:17:34 compute-0 podman[141584]: 2025-11-29 05:17:34.448901437 +0000 UTC m=+0.143899902 container died 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:17:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab125ec6bfd18c1dc451185488961f5c999b0da71214b5f8d1f83afdf416c250-merged.mount: Deactivated successfully.
Nov 29 05:17:34 compute-0 podman[141584]: 2025-11-29 05:17:34.485531267 +0000 UTC m=+0.180529732 container remove 4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bose, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:17:34 compute-0 systemd[1]: libpod-conmon-4f7684590510e1d475ba7c4721efbe8c4ae4c1afad8a885b435ba701a30e24f3.scope: Deactivated successfully.
Nov 29 05:17:34 compute-0 podman[141624]: 2025-11-29 05:17:34.721104949 +0000 UTC m=+0.069070597 container create e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:17:34 compute-0 systemd[1]: Started libpod-conmon-e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac.scope.
Nov 29 05:17:34 compute-0 podman[141624]: 2025-11-29 05:17:34.691009389 +0000 UTC m=+0.038975127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:17:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d531b9a43c509e16d587f1b36ba811c4ec081c50333c40951255c3f71384e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d531b9a43c509e16d587f1b36ba811c4ec081c50333c40951255c3f71384e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d531b9a43c509e16d587f1b36ba811c4ec081c50333c40951255c3f71384e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d531b9a43c509e16d587f1b36ba811c4ec081c50333c40951255c3f71384e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:17:34 compute-0 podman[141624]: 2025-11-29 05:17:34.824519157 +0000 UTC m=+0.172484835 container init e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 05:17:34 compute-0 podman[141624]: 2025-11-29 05:17:34.84096874 +0000 UTC m=+0.188934428 container start e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 05:17:34 compute-0 podman[141624]: 2025-11-29 05:17:34.845516712 +0000 UTC m=+0.193482350 container attach e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:17:34 compute-0 ceph-mon[75176]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:35 compute-0 friendly_jemison[141640]: {
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "osd_id": 0,
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "type": "bluestore"
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:     },
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "osd_id": 1,
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "type": "bluestore"
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:     },
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "osd_id": 2,
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:         "type": "bluestore"
Nov 29 05:17:35 compute-0 friendly_jemison[141640]:     }
Nov 29 05:17:35 compute-0 friendly_jemison[141640]: }
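
This second JSON block is the output of the `raw list --format json` call visible in the sudo COMMAND line a few entries up: a map keyed by OSD uuid, giving each bluestore OSD's device-mapper path and id. A small sketch for turning it into a one-line-per-OSD summary, assuming the JSON has been saved to raw_list.json and jq is available:

    jq -r 'to_entries | sort_by(.value.osd_id)[]
           | "osd.\(.value.osd_id)  \(.value.device)  (\(.value.type), fsid \(.key))"' raw_list.json
    # osd.0  /dev/mapper/ceph_vg0-ceph_lv0  (bluestore, fsid 3cc3f442-c807-4e2a-868e-a4aae87af231)
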
Nov 29 05:17:35 compute-0 systemd[1]: libpod-e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac.scope: Deactivated successfully.
Nov 29 05:17:35 compute-0 systemd[1]: libpod-e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac.scope: Consumed 1.078s CPU time.
Nov 29 05:17:35 compute-0 podman[141673]: 2025-11-29 05:17:35.958541509 +0000 UTC m=+0.029076875 container died e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:17:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-17d531b9a43c509e16d587f1b36ba811c4ec081c50333c40951255c3f71384e2-merged.mount: Deactivated successfully.
Nov 29 05:17:36 compute-0 podman[141673]: 2025-11-29 05:17:36.013352514 +0000 UTC m=+0.083887860 container remove e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_jemison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:17:36 compute-0 systemd[1]: libpod-conmon-e4783bd30b21e2deba16fd1d3a34f8ff054436772df58fb2a44ed059bc8691ac.scope: Deactivated successfully.
Nov 29 05:17:36 compute-0 sudo[141517]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:17:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:17:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:17:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
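
The two config-key writes above are the cephadm mgr module persisting the just-refreshed device inventory and host record for compute-0 into the monitor's key/value store. The keys can be listed directly, assuming admin keyring access:

    ceph config-key ls | grep 'mgr/cephadm/host.compute-0'
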
Nov 29 05:17:36 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 0f0c2f49-f302-4084-98ee-f2aa0549138c does not exist
Nov 29 05:17:36 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 57150611-c6a5-4886-b05c-dae9c942acbe does not exist
Nov 29 05:17:36 compute-0 sudo[141688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:17:36 compute-0 sudo[141688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:36 compute-0 sudo[141688]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:36 compute-0 sudo[141713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:17:36 compute-0 sudo[141713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:17:36 compute-0 sudo[141713]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:37 compute-0 ceph-mon[75176]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:17:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:17:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:38 compute-0 sshd-session[141738]: Accepted publickey for zuul from 192.168.122.30 port 60340 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:17:38 compute-0 systemd-logind[793]: New session 44 of user zuul.
Nov 29 05:17:38 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 29 05:17:38 compute-0 sshd-session[141738]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:17:39 compute-0 ceph-mon[75176]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:39 compute-0 sudo[141891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uymzhlfcuhvpyhxabphshwysowysuqnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393459.0712829-22-88813095918694/AnsiballZ_file.py'
Nov 29 05:17:39 compute-0 sudo[141891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:39 compute-0 python3.9[141893]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:39 compute-0 sudo[141891]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:40 compute-0 sudo[142043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqpayauxrxhjamxvajdcczhhtelpmtke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393460.0237696-34-203625331488709/AnsiballZ_stat.py'
Nov 29 05:17:40 compute-0 sudo[142043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:40 compute-0 python3.9[142045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:40 compute-0 sudo[142043]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:41 compute-0 ceph-mon[75176]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:41 compute-0 sudo[142166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkbtuwocusopibeozpacvabrpwxcxvkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393460.0237696-34-203625331488709/AnsiballZ_copy.py'
Nov 29 05:17:41 compute-0 sudo[142166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:17:41
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'images', 'backups', 'vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
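
The balancer lines show one pass of the upmap balancer: it considered all eleven pools and prepared 0 of a possible 10 changes, i.e. PGs are already evenly mapped, consistent with the steady 305 active+clean reported throughout. The current mode and idle/active state can be checked with:

    ceph balancer status
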
Nov 29 05:17:41 compute-0 python3.9[142168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393460.0237696-34-203625331488709/.source.conf _original_basename=ceph.conf follow=False checksum=f36dbb4697f374c5e3f0472993712ce777bfe2a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
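
From here the zuul Ansible run hands the cluster credentials over to OpenStack: it created /var/lib/openstack/config/ceph, then copies in ceph.conf (mode 0644) and, just below, ceph.client.openstack.keyring (mode 0600). A hedged ad-hoc equivalent of the copy task logged above, with the local src path assumed:

    ansible compute-0 -b -m ansible.builtin.copy \
        -a 'src=ceph.conf dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644'
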
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:17:41 compute-0 sudo[142166]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:17:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:41 compute-0 sudo[142318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fujelnbpduvofouphjmpyblvyxoewlhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393461.489297-34-40046112781311/AnsiballZ_stat.py'
Nov 29 05:17:41 compute-0 sudo[142318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:41 compute-0 python3.9[142320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:17:41 compute-0 sudo[142318]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:42 compute-0 sudo[142441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orqcasvocbixasfwfkuoxfdhktlefnrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393461.489297-34-40046112781311/AnsiballZ_copy.py'
Nov 29 05:17:42 compute-0 sudo[142441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:42 compute-0 python3.9[142443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393461.489297-34-40046112781311/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=1cc9e4eb20e7af3f1c9d65ee54a3a3ef5b88c5e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:17:42 compute-0 sudo[142441]: pam_unix(sudo:session): session closed for user root
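
The two copy tasks above stage the Ceph client configuration under /var/lib/openstack/config/ceph: ceph.conf world-readable (0644) and the ceph.client.openstack.keyring owner-only (0600). A minimal Python sketch of the end state they produce — paths and modes are from the log; the source file names here are placeholders, and this is illustrative, not the copy module's implementation:

    # Illustrative only: the end state of the two ansible copy tasks above.
    # Source paths are placeholders for the staged temp files in the log.
    import os
    import shutil

    CFG_DIR = "/var/lib/openstack/config/ceph"

    shutil.copy("ceph.conf", os.path.join(CFG_DIR, "ceph.conf"))
    os.chmod(os.path.join(CFG_DIR, "ceph.conf"), 0o644)  # world-readable config

    shutil.copy("ceph.client.openstack.keyring",
                os.path.join(CFG_DIR, "ceph.client.openstack.keyring"))
    # Keyrings carry cluster credentials, hence owner-only permissions.
    os.chmod(os.path.join(CFG_DIR, "ceph.client.openstack.keyring"), 0o600)
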
Nov 29 05:17:42 compute-0 sshd-session[141741]: Connection closed by 192.168.122.30 port 60340
Nov 29 05:17:43 compute-0 sshd-session[141738]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:17:43 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 05:17:43 compute-0 systemd[1]: session-44.scope: Consumed 2.700s CPU time.
Nov 29 05:17:43 compute-0 systemd-logind[793]: Session 44 logged out. Waiting for processes to exit.
Nov 29 05:17:43 compute-0 systemd-logind[793]: Removed session 44.
Nov 29 05:17:43 compute-0 ceph-mon[75176]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:45 compute-0 ceph-mon[75176]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:47 compute-0 ceph-mon[75176]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:48 compute-0 sshd-session[142468]: Accepted publickey for zuul from 192.168.122.30 port 56022 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:17:48 compute-0 systemd-logind[793]: New session 45 of user zuul.
Nov 29 05:17:48 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 29 05:17:48 compute-0 sshd-session[142468]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:17:49 compute-0 ceph-mon[75176]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:49 compute-0 python3.9[142621]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:17:50 compute-0 sudo[142775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnqnwhyjmrfomcjxsrcqowgaumvmilsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393470.3311627-34-26536748782094/AnsiballZ_file.py'
Nov 29 05:17:50 compute-0 sudo[142775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:50 compute-0 python3.9[142777]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:51 compute-0 sudo[142775]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:51 compute-0 ceph-mon[75176]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:17:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:51 compute-0 sudo[142927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lroqczczzfqjpuraenouxinuqdjatzxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393471.1576211-34-142011794295451/AnsiballZ_file.py'
Nov 29 05:17:51 compute-0 sudo[142927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:51 compute-0 python3.9[142929]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:17:51 compute-0 sudo[142927]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:52 compute-0 python3.9[143079]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:17:53 compute-0 ceph-mon[75176]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:53 compute-0 sudo[143229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esuhcnadgvkpxtplqdhvbitrdhlfzyue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393472.9843972-57-165693884602374/AnsiballZ_seboolean.py'
Nov 29 05:17:53 compute-0 sudo[143229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:53 compute-0 python3.9[143231]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
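
The seboolean task above persistently enables virt_sandbox_use_netlink, which allows containerized virt processes to open netlink sockets under SELinux. On the host this amounts to the standard SELinux CLI call, sketched here:

    # Host-side equivalent of the ansible.posix.seboolean task above;
    # -P makes the boolean persistent across reboots.
    import subprocess

    subprocess.run(["setsebool", "-P", "virt_sandbox_use_netlink", "on"], check=True)
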
Nov 29 05:17:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:17:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2033 writes, 9029 keys, 2033 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2033 writes, 2033 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2033 writes, 9029 keys, 2033 commit groups, 1.0 writes per commit group, ingest: 11.43 MB, 0.02 MB/s
                                           Interval WAL: 2033 writes, 2033 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    110.6      0.08              0.03         3    0.026       0      0       0.0       0.0
                                             L6      1/0    6.63 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    128.7    113.2      0.12              0.05         2    0.061    7168    731       0.0       0.0
                                            Sum      1/0    6.63 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     78.8    112.1      0.20              0.08         5    0.040    7168    731       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     79.7    113.2      0.20              0.08         4    0.049    7168    731       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    128.7    113.2      0.12              0.05         2    0.061    7168    731       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    113.2      0.07              0.03         2    0.037       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.008, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556a62a271f0#2 capacity: 308.00 MB usage: 554.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 6.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(36,467.97 KB,0.148377%) FilterBlock(6,27.55 KB,0.00873417%) IndexBlock(6,59.16 KB,0.0187564%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 05:17:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:55 compute-0 ceph-mon[75176]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:55 compute-0 sudo[143229]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:56 compute-0 sudo[143386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anslmzpizpmzmcexpddgqcbkeefgxcya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393475.8754506-67-130596052884435/AnsiballZ_setup.py'
Nov 29 05:17:56 compute-0 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 05:17:56 compute-0 sudo[143386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:56 compute-0 ceph-mon[75176]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:56 compute-0 python3.9[143388]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:17:56 compute-0 sudo[143386]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:57 compute-0 sudo[143470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfhxvhpqvpexupprlbxwachqcthhydhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393475.8754506-67-130596052884435/AnsiballZ_dnf.py'
Nov 29 05:17:57 compute-0 sudo[143470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:57 compute-0 python3.9[143472]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:17:58 compute-0 ceph-mon[75176]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:58 compute-0 sudo[143470]: pam_unix(sudo:session): session closed for user root
Nov 29 05:17:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:17:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:17:59 compute-0 sshd-session[143474]: Invalid user nrk from 45.120.216.232 port 58038
Nov 29 05:17:59 compute-0 sudo[143625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqmzlyfxujanbpbyakxaqtxpnvsrjton ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393478.9547744-79-249540870220133/AnsiballZ_systemd.py'
Nov 29 05:17:59 compute-0 sudo[143625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:17:59 compute-0 sshd-session[143474]: Received disconnect from 45.120.216.232 port 58038:11: Bye Bye [preauth]
Nov 29 05:17:59 compute-0 sshd-session[143474]: Disconnected from invalid user nrk 45.120.216.232 port 58038 [preauth]
Nov 29 05:17:59 compute-0 python3.9[143627]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 05:17:59 compute-0 sudo[143625]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:00 compute-0 ceph-mon[75176]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:00 compute-0 sudo[143780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-veunpxjscytvjrcwntejbaefjwwbjdql ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764393480.1715438-87-86855489121872/AnsiballZ_edpm_nftables_snippet.py'
Nov 29 05:18:00 compute-0 sudo[143780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:00 compute-0 python3[143782]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 05:18:00 compute-0 sudo[143780]: pam_unix(sudo:session): session closed for user root
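
The edpm_nftables_snippet content above opens UDP 4789 (VXLAN) and 6081 (Geneve) for Neutron tunnel traffic and adds NOTRACK entries for Geneve in the raw table's OUTPUT and PREROUTING chains, so tunnel packets bypass conntrack. The log does not show how the module renders these entries into nft syntax; the sketch below is a hypothetical rendering to illustrate the mapping (the chain defaults, table defaults, and nft dialect are assumptions, not taken from the edpm_nftables roles):

    # Hypothetical rendering of the snippet entries above into nft statements.
    import yaml  # PyYAML, assumed installed

    SNIPPET = """
    - rule_name: 118 neutron vxlan networks
      rule: {proto: udp, dport: 4789}
    - rule_name: 120 neutron geneve networks no conntrack
      rule: {proto: udp, dport: 6081, table: raw, chain: OUTPUT, jump: NOTRACK}
    """

    for entry in yaml.safe_load(SNIPPET):
        r = entry["rule"]
        verdict = "notrack" if r.get("jump") == "NOTRACK" else "accept"
        table = r.get("table", "filter")   # assumed default
        chain = r.get("chain", "INPUT")    # assumed default
        print(f'# {entry["rule_name"]}')
        print(f"add rule ip {table} {chain} {r['proto']} dport {r['dport']} {verdict}")
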
Nov 29 05:18:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:01 compute-0 sudo[143932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzlicvupchwgiswsjdkjpxejdvhbtulw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393481.2389908-96-213211844212465/AnsiballZ_file.py'
Nov 29 05:18:01 compute-0 sudo[143932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:01 compute-0 python3.9[143934]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:01 compute-0 sudo[143932]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:02 compute-0 ceph-mon[75176]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:02 compute-0 sudo[144084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aejaavxtdconupvkdvbwwqjtuqzzzyqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393482.053747-104-179297170743784/AnsiballZ_stat.py'
Nov 29 05:18:02 compute-0 sudo[144084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:02 compute-0 python3.9[144086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:02 compute-0 sudo[144084]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:03 compute-0 sudo[144162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwxlmotjbmmaovayveboprekkqdlgwpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393482.053747-104-179297170743784/AnsiballZ_file.py'
Nov 29 05:18:03 compute-0 sudo[144162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:03 compute-0 python3.9[144164]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:03 compute-0 sudo[144162]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:03 compute-0 sudo[144314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntmbmbfbdafenzqxyvefzssyqcstwcth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393483.6524491-116-68729424333130/AnsiballZ_stat.py'
Nov 29 05:18:03 compute-0 sudo[144314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:04 compute-0 python3.9[144316]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:04 compute-0 sudo[144314]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:04 compute-0 sudo[144392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzpsswcxcntjjawkkylftlcjentxkabi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393483.6524491-116-68729424333130/AnsiballZ_file.py'
Nov 29 05:18:04 compute-0 sudo[144392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:04 compute-0 ceph-mon[75176]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:04 compute-0 python3.9[144394]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.q7xayevk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:04 compute-0 sudo[144392]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:05 compute-0 sudo[144544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcztoerpefcehsmlbtpminkkwnkndmom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393484.8326056-128-134768124039793/AnsiballZ_stat.py'
Nov 29 05:18:05 compute-0 sudo[144544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:05 compute-0 python3.9[144546]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:05 compute-0 sudo[144544]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:05 compute-0 sudo[144622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psnyugxyoitiouugydnjvgaoaxqpbspm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393484.8326056-128-134768124039793/AnsiballZ_file.py'
Nov 29 05:18:05 compute-0 sudo[144622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:06 compute-0 python3.9[144624]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:06 compute-0 sudo[144622]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:06 compute-0 ceph-mon[75176]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:06 compute-0 sudo[144774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjekywvlpdphjyvfiqraddofnwlrezzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393486.2528803-141-278948991518308/AnsiballZ_command.py'
Nov 29 05:18:06 compute-0 sudo[144774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:06 compute-0 python3.9[144776]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:18:06 compute-0 sudo[144774]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:07 compute-0 sudo[144927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaamaemqzycauvqogycctvwpszokidoe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764393487.2962773-149-130172742045535/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 05:18:07 compute-0 sudo[144927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:08 compute-0 python3[144929]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 05:18:08 compute-0 sudo[144927]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:08 compute-0 ceph-mon[75176]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:08 compute-0 sudo[145079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tahyvpgfdqyuwfkhbsnwwnnlvqxswqpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393488.4726708-157-205592488769979/AnsiballZ_stat.py'
Nov 29 05:18:08 compute-0 sudo[145079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:09 compute-0 python3.9[145081]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:09 compute-0 sudo[145079]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:09 compute-0 sudo[145204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjotvhqpokswsqubzyuwcetlxicxenqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393488.4726708-157-205592488769979/AnsiballZ_copy.py'
Nov 29 05:18:09 compute-0 sudo[145204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:09 compute-0 python3.9[145206]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393488.4726708-157-205592488769979/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:09 compute-0 sudo[145204]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:10 compute-0 sudo[145356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haupysppnvzjrgwmvyvqhaqhpftjdeqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393490.1617022-172-256688585185865/AnsiballZ_stat.py'
Nov 29 05:18:10 compute-0 sudo[145356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:10 compute-0 ceph-mon[75176]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:10 compute-0 python3.9[145358]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:10 compute-0 sudo[145356]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:11 compute-0 sudo[145481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffhxqmtfgbsclmxrsniivqnopiscxjej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393490.1617022-172-256688585185865/AnsiballZ_copy.py'
Nov 29 05:18:11 compute-0 sudo[145481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:11 compute-0 python3.9[145483]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393490.1617022-172-256688585185865/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:18:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:18:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:18:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:18:11 compute-0 sudo[145481]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:18:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:18:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:11 compute-0 sudo[145633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-latoegmodtffcpbjxbpfgoejmtmiodyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393491.5893674-187-155179837342542/AnsiballZ_stat.py'
Nov 29 05:18:11 compute-0 sudo[145633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:12 compute-0 python3.9[145635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:12 compute-0 sudo[145633]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:12 compute-0 ceph-mon[75176]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:12 compute-0 sudo[145758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlftlbjyfyvaieirlbtjnqudlipvsgqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393491.5893674-187-155179837342542/AnsiballZ_copy.py'
Nov 29 05:18:12 compute-0 sudo[145758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:12 compute-0 python3.9[145760]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393491.5893674-187-155179837342542/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:12 compute-0 sudo[145758]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:13 compute-0 sudo[145910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqsxxykcdsacrkzbvcqdkjsbajdskyfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393493.1124704-202-264766161544286/AnsiballZ_stat.py'
Nov 29 05:18:13 compute-0 sudo[145910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:13 compute-0 python3.9[145912]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:13 compute-0 sudo[145910]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:13 compute-0 sudo[146035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwoiretvuavvosyixqtppmgroxwwdcjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393493.1124704-202-264766161544286/AnsiballZ_copy.py'
Nov 29 05:18:13 compute-0 sudo[146035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:14 compute-0 python3.9[146037]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393493.1124704-202-264766161544286/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:14 compute-0 sudo[146035]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:14 compute-0 ceph-mon[75176]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:14 compute-0 sudo[146187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdzmtdycjjkepeambvehqdxpbguassjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393494.3104706-217-75169874709429/AnsiballZ_stat.py'
Nov 29 05:18:14 compute-0 sudo[146187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:14 compute-0 python3.9[146189]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:15 compute-0 sudo[146187]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:15 compute-0 sudo[146312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hobpdnkaibuplctxvnnhfjvhkbwaqdak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393494.3104706-217-75169874709429/AnsiballZ_copy.py'
Nov 29 05:18:15 compute-0 sudo[146312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:15 compute-0 python3.9[146314]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393494.3104706-217-75169874709429/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:15 compute-0 sudo[146312]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:16 compute-0 sudo[146464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elqbkyhizvtxofmlsyspuszzgyngrcmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393495.9021533-232-179200837175086/AnsiballZ_file.py'
Nov 29 05:18:16 compute-0 sudo[146464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:16 compute-0 python3.9[146466]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:16 compute-0 sudo[146464]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:16 compute-0 ceph-mon[75176]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:17 compute-0 sudo[146616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikcejcqseeqfbmgeamvxzcgtlptqpcri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393496.7456934-240-192322109032261/AnsiballZ_command.py'
Nov 29 05:18:17 compute-0 sudo[146616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:17 compute-0 python3.9[146618]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:18:17 compute-0 sudo[146616]: pam_unix(sudo:session): session closed for user root
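
The command above dry-runs the combined EDPM ruleset: the fragments are concatenated in dependency order (chains first, then flushes, rules, and the jump files) and fed to nft with -c, which parses and validates without committing anything to the kernel. The same check, sketched in Python with the paths and ordering from the logged command:

    # Concatenate the EDPM nftables fragments in the logged order and
    # dry-run them; check=True raises if nft rejects the ruleset.
    import subprocess
    from pathlib import Path

    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    ruleset = "\n".join(Path(p).read_text() for p in FRAGMENTS)
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, text=True, check=True)
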
Nov 29 05:18:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:18 compute-0 sudo[146771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vexlynodwyqukbrbzlulgomfldajvqfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393497.5541744-248-74644603330431/AnsiballZ_blockinfile.py'
Nov 29 05:18:18 compute-0 sudo[146771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:18 compute-0 python3.9[146773]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:18 compute-0 sudo[146771]: pam_unix(sudo:session): session closed for user root
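
Given marker=# {mark} ANSIBLE MANAGED BLOCK with marker_begin=BEGIN and marker_end=END, the blockinfile task above should leave /etc/sysconfig/nftables.conf containing a block like the following (validated before write by `nft -c -f %s`), so the full EDPM ruleset is restored by the nftables service at boot:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
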
Nov 29 05:18:18 compute-0 ceph-mon[75176]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:18 compute-0 sudo[146923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikspbnyvclpcapeskseheqjfgptjfiqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393498.5413527-257-146279788599402/AnsiballZ_command.py'
Nov 29 05:18:18 compute-0 sudo[146923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:19 compute-0 python3.9[146925]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:18:19 compute-0 sudo[146923]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:19 compute-0 sudo[147076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwigdnpsdszuwyulxnshpohkfepsybjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393499.323586-265-57526233656158/AnsiballZ_stat.py'
Nov 29 05:18:19 compute-0 sudo[147076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:19 compute-0 python3.9[147078]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:18:19 compute-0 sudo[147076]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:20 compute-0 sudo[147230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhjxxvavdzcjzpuocwtkzatnubljbade ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393500.1947-273-27885492805919/AnsiballZ_command.py'
Nov 29 05:18:20 compute-0 sudo[147230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:20 compute-0 ceph-mon[75176]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:20 compute-0 python3.9[147232]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:18:20 compute-0 sudo[147230]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:21 compute-0 sudo[147385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iebgzwqqfoqwoksujtwcykmdmekvcsad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393501.0106385-281-29106073410468/AnsiballZ_file.py'
Nov 29 05:18:21 compute-0 sudo[147385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:21 compute-0 python3.9[147387]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:21 compute-0 sudo[147385]: pam_unix(sudo:session): session closed for user root
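
The three tasks above form a change-marker pattern: rewriting edpm-rules.nft touched edpm-rules.nft.changed earlier, the stat task checks for that marker, the reload (flushes, rules, update-jumps piped to `nft -f -`) runs only while the marker exists, and the marker is then removed so unchanged runs skip the reload. A compact sketch of the pattern, using the paths from the log:

    # Reload the live ruleset only when the marker says the rules changed.
    import os
    import subprocess

    MARKER = "/etc/nftables/edpm-rules.nft.changed"
    FRAGMENTS = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    if os.path.exists(MARKER):
        ruleset = "".join(open(p).read() for p in FRAGMENTS)
        subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)
        os.remove(MARKER)  # next run is a no-op until the rules change again
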
Nov 29 05:18:22 compute-0 ceph-mon[75176]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:22 compute-0 python3.9[147537]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:18:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:23 compute-0 sudo[147690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flcploufivtuupvrpusiztrxyvsstnfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393503.416356-321-206728809021436/AnsiballZ_command.py'
Nov 29 05:18:23 compute-0 sudo[147690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:23 compute-0 python3.9[147692]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:18:23 compute-0 ovs-vsctl[147693]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 29 05:18:23 compute-0 sudo[147690]: pam_unix(sudo:session): session closed for user root
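
The ovs-vsctl call above registers this node as an OVN chassis by writing external_ids on the Open_vSwitch record: the Geneve encapsulation endpoint (172.19.0.100), the bridge mapping datacentre:br-ex, and the southbound database at ssl:ovsdbserver-sb.openstack.svc:6642 that ovn-controller will connect to. A quick way to read the settings back for verification (sketch):

    # Read back the OVN chassis settings written by the ovs-vsctl call above.
    import subprocess

    out = subprocess.run(
        ["ovs-vsctl", "get", "Open_vSwitch", ".", "external_ids"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # e.g. {ovn-encap-ip="172.19.0.100", ovn-encap-type=geneve, ...}
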
Nov 29 05:18:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:24 compute-0 sudo[147843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxuikxbqlfkxayjughbibkbdiysktcrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393504.192141-330-69494179114017/AnsiballZ_command.py'
Nov 29 05:18:24 compute-0 sudo[147843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:24 compute-0 ceph-mon[75176]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:24 compute-0 python3.9[147845]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:18:24 compute-0 sshd-session[147563]: Invalid user ubuntu from 61.240.213.113 port 38602
Nov 29 05:18:24 compute-0 sudo[147843]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:24 compute-0 sshd-session[147563]: Received disconnect from 61.240.213.113 port 38602:11:  [preauth]
Nov 29 05:18:24 compute-0 sshd-session[147563]: Disconnected from invalid user ubuntu 61.240.213.113 port 38602 [preauth]
Nov 29 05:18:25 compute-0 sudo[147998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcwtnhhlkjyviklckolkegdekwdiiwgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393504.9140348-338-162363509392050/AnsiballZ_command.py'
Nov 29 05:18:25 compute-0 sudo[147998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:25 compute-0 python3.9[148000]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:18:25 compute-0 ovs-vsctl[148001]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 05:18:25 compute-0 sudo[147998]: pam_unix(sudo:session): session closed for user root
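
The Manager record created above makes ovsdb-server accept passive TCP connections on 127.0.0.1:6640 (ptcp), which local agents use to reach the switch database; the preceding `ovs-vsctl show | grep -q "Manager"` guard keeps the task from re-adding the record on later runs. A minimal liveness check for the listener:

    # Confirm the passive OVSDB manager socket created above is accepting
    # connections on the loopback address from the logged command.
    import socket

    with socket.create_connection(("127.0.0.1", 6640), timeout=5):
        print("ovsdb-server is accepting manager connections on 127.0.0.1:6640")
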
Nov 29 05:18:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:25 compute-0 python3.9[148151]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:18:26 compute-0 sudo[148303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iehlzsuxvasiuznyujppflafqxcdrnzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393506.2011523-355-214859718031248/AnsiballZ_file.py'
Nov 29 05:18:26 compute-0 sudo[148303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:26 compute-0 python3.9[148305]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:18:26 compute-0 ceph-mon[75176]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:26 compute-0 sudo[148303]: pam_unix(sudo:session): session closed for user root
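
[annotation] A rough non-Ansible equivalent of the file task above, as an illustration only (not taken from the log): the module ensures the directory exists and recursively applies the SELinux type:

    mkdir -p /var/local/libexec
    chcon -R -t container_file_t /var/local/libexec
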
Nov 29 05:18:27 compute-0 sudo[148455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekrofczllfajvonnfstccxugqocupeir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393506.8517191-363-275657478205250/AnsiballZ_stat.py'
Nov 29 05:18:27 compute-0 sudo[148455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:27 compute-0 python3.9[148457]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:27 compute-0 sudo[148455]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:27 compute-0 sudo[148533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcvsecekggkqrbdshibmswkznbvtznrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393506.8517191-363-275657478205250/AnsiballZ_file.py'
Nov 29 05:18:27 compute-0 sudo[148533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:27 compute-0 python3.9[148535]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:18:27 compute-0 sudo[148533]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:28 compute-0 sudo[148685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yttxswppiixjgwixwhyriaorlodhziut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393507.9145846-363-123344403725543/AnsiballZ_stat.py'
Nov 29 05:18:28 compute-0 sudo[148685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:28 compute-0 python3.9[148687]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:28 compute-0 sudo[148685]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:28 compute-0 ceph-mon[75176]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:28 compute-0 sudo[148763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxwjuzxkfnmsdguorvrjwsucdjlomgic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393507.9145846-363-123344403725543/AnsiballZ_file.py'
Nov 29 05:18:28 compute-0 sudo[148763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:29 compute-0 python3.9[148765]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:18:29 compute-0 sudo[148763]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:29 compute-0 sudo[148915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-debnmtvuqazuttseaanohbmtzpjxcztj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393509.2210734-386-207422853009248/AnsiballZ_file.py'
Nov 29 05:18:29 compute-0 sudo[148915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:29 compute-0 python3.9[148917]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:29 compute-0 sudo[148915]: pam_unix(sudo:session): session closed for user root
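
[annotation] mode=420 in the entry above is not a typo: an unquoted YAML mode is serialized as a decimal integer, and 420 decimal is 0644 octal. A quick check:

    printf '%o\n' 420    # prints 644
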
Nov 29 05:18:30 compute-0 sudo[149067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtqikjylhsapzatpyvslhrsmwevlenyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393509.955953-394-174396252433544/AnsiballZ_stat.py'
Nov 29 05:18:30 compute-0 sudo[149067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:30 compute-0 python3.9[149069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:30 compute-0 sudo[149067]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:30 compute-0 ceph-mon[75176]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:30 compute-0 sudo[149145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sigazuwvrxnmlyhkbtbiuskueaitbqgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393509.955953-394-174396252433544/AnsiballZ_file.py'
Nov 29 05:18:30 compute-0 sudo[149145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:30 compute-0 python3.9[149147]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:31 compute-0 sudo[149145]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:31 compute-0 sudo[149297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxajrcpzokcqgiquhtbxxkdltzrjmnfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393511.2202163-406-54737601814603/AnsiballZ_stat.py'
Nov 29 05:18:31 compute-0 sudo[149297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:31 compute-0 python3.9[149299]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:31 compute-0 sudo[149297]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:32 compute-0 sudo[149375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayeyitlhwdvfewktplhdizdtnnlcrghl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393511.2202163-406-54737601814603/AnsiballZ_file.py'
Nov 29 05:18:32 compute-0 sudo[149375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:32 compute-0 python3.9[149377]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:32 compute-0 sudo[149375]: pam_unix(sudo:session): session closed for user root
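
[annotation] The preset file's contents are not logged. By systemd-preset convention, a file that pins this unit on would hold a single directive; the following body is an assumption, not taken from the log:

    # /etc/systemd/system-preset/91-edpm-container-shutdown.preset (assumed contents)
    enable edpm-container-shutdown.service
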
Nov 29 05:18:32 compute-0 ceph-mon[75176]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.709082) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512709193, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 687, "num_deletes": 251, "total_data_size": 864928, "memory_usage": 877680, "flush_reason": "Manual Compaction"}
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512719212, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 857440, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9010, "largest_seqno": 9696, "table_properties": {"data_size": 853828, "index_size": 1456, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7670, "raw_average_key_size": 18, "raw_value_size": 846678, "raw_average_value_size": 2025, "num_data_blocks": 67, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764393454, "oldest_key_time": 1764393454, "file_creation_time": 1764393512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 10249 microseconds, and 6324 cpu microseconds.
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.719340) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 857440 bytes OK
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.719370) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.721132) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.721161) EVENT_LOG_v1 {"time_micros": 1764393512721151, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.721188) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 861356, prev total WAL file size 861356, number of live WAL files 2.
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.721988) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(837KB)], [23(6789KB)]
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512722017, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7809783, "oldest_snapshot_seqno": -1}
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3289 keys, 6072600 bytes, temperature: kUnknown
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512764109, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6072600, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6048762, "index_size": 14513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79741, "raw_average_key_size": 24, "raw_value_size": 5987396, "raw_average_value_size": 1820, "num_data_blocks": 633, "num_entries": 3289, "num_filter_entries": 3289, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764393512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.764562) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6072600 bytes
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.765596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.1 rd, 143.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 6.6 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(16.2) write-amplify(7.1) OK, records in: 3802, records dropped: 513 output_compression: NoCompression
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.765613) EVENT_LOG_v1 {"time_micros": 1764393512765604, "job": 8, "event": "compaction_finished", "compaction_time_micros": 42414, "compaction_time_cpu_micros": 13563, "output_level": 6, "num_output_files": 1, "total_output_size": 6072600, "num_input_records": 3802, "num_output_records": 3289, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512765987, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393512767323, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.721945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.767369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.767373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.767375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.767376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:18:32 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:18:32.767378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
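
[annotation] The amplification figures reported for JOB 8 follow directly from the byte counts logged above (input_data_size 7809783, L0 input table #25 = 857440, output table #26 = 6072600):

    # write-amplify      = output / L0 input           = 6072600 / 857440             ≈ 7.1
    # read-write-amplify = (input + output) / L0 input = (7809783 + 6072600) / 857440 ≈ 16.2
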
Nov 29 05:18:32 compute-0 sudo[149527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxwviwzibvxgqbetrjpmoiexmekbaglv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393512.4762712-418-193654306051366/AnsiballZ_systemd.py'
Nov 29 05:18:32 compute-0 sudo[149527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:33 compute-0 python3.9[149529]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:18:33 compute-0 systemd[1]: Reloading.
Nov 29 05:18:33 compute-0 systemd-sysv-generator[149555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:18:33 compute-0 systemd-rc-local-generator[149548]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:18:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:33 compute-0 sudo[149527]: pam_unix(sudo:session): session closed for user root
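
[annotation] The ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started) is roughly equivalent to running by hand:

    systemctl daemon-reload
    systemctl enable edpm-container-shutdown.service
    systemctl start edpm-container-shutdown.service
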
Nov 29 05:18:34 compute-0 sudo[149717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xihdxkbgehzqhjokiwucytlsvixvjlgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393513.7781768-426-150082521219038/AnsiballZ_stat.py'
Nov 29 05:18:34 compute-0 sudo[149717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:34 compute-0 python3.9[149719]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:34 compute-0 sudo[149717]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:34 compute-0 sudo[149795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwwcncknncrnjhynyaaokgadsmirnzik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393513.7781768-426-150082521219038/AnsiballZ_file.py'
Nov 29 05:18:34 compute-0 sudo[149795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:34 compute-0 ceph-mon[75176]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:34 compute-0 python3.9[149797]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:34 compute-0 sudo[149795]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:35 compute-0 sudo[149947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-decpghhwjgsychcdiavfgfyfoomeowpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393515.0576003-438-140366575471312/AnsiballZ_stat.py'
Nov 29 05:18:35 compute-0 sudo[149947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:35 compute-0 python3.9[149949]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:35 compute-0 sudo[149947]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:35 compute-0 sudo[150025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jursthbohqpqtqphkhclcokqinmdewcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393515.0576003-438-140366575471312/AnsiballZ_file.py'
Nov 29 05:18:35 compute-0 sudo[150025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:36 compute-0 python3.9[150027]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:36 compute-0 sudo[150025]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:36 compute-0 sudo[150075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:18:36 compute-0 sudo[150075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:36 compute-0 sudo[150075]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:36 compute-0 sudo[150123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:18:36 compute-0 sudo[150123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:36 compute-0 sudo[150123]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:36 compute-0 sudo[150167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:18:36 compute-0 sudo[150167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:36 compute-0 sudo[150167]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:36 compute-0 sudo[150202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:18:36 compute-0 sudo[150202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:36 compute-0 ceph-mon[75176]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:36 compute-0 sudo[150277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeegdobibunemcuiappceimldoogsiog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393516.2921655-450-13684421395700/AnsiballZ_systemd.py'
Nov 29 05:18:36 compute-0 sudo[150277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:37 compute-0 python3.9[150279]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:18:37 compute-0 systemd[1]: Reloading.
Nov 29 05:18:37 compute-0 systemd-rc-local-generator[150328]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:18:37 compute-0 systemd-sysv-generator[150331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:18:37 compute-0 sudo[150202]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:18:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:18:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:18:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:18:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:18:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:18:37 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ac7bbc08-296f-434c-901e-a58713aa0beb does not exist
Nov 29 05:18:37 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev bb3e150f-20d7-44cb-8d3d-14a9f5e696f8 does not exist
Nov 29 05:18:37 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev e5047bf7-0b2c-40d9-9ed8-82d0e261e5a1 does not exist
Nov 29 05:18:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:18:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:18:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:18:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:18:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:18:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:18:37 compute-0 sudo[150348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:18:37 compute-0 sudo[150348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:37 compute-0 sudo[150348]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:37 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 05:18:37 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 05:18:37 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 05:18:37 compute-0 systemd[1]: Finished Create netns directory.
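
[annotation] Under systemd's path escaping, the run-netns-placeholder.mount name above decodes to a mount at /run/netns/placeholder, consistent with a oneshot that creates a throwaway namespace so /run/netns exists and stays mounted (an inference; the unit file itself is not in the log). The escaping can be checked with:

    systemd-escape --path /run/netns/placeholder    # prints run-netns-placeholder
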
Nov 29 05:18:37 compute-0 sudo[150375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:18:37 compute-0 sudo[150375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:37 compute-0 sudo[150375]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:37 compute-0 sudo[150277]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:37 compute-0 sudo[150405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:18:37 compute-0 sudo[150405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:37 compute-0 sudo[150405]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:37 compute-0 sudo[150437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:18:37 compute-0 sudo[150437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:18:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:18:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:18:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:18:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:18:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:18:37 compute-0 podman[150571]: 2025-11-29 05:18:37.855979693 +0000 UTC m=+0.057591629 container create ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:18:37 compute-0 systemd[1]: Started libpod-conmon-ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33.scope.
Nov 29 05:18:37 compute-0 podman[150571]: 2025-11-29 05:18:37.836878271 +0000 UTC m=+0.038490227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:18:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:18:37 compute-0 podman[150571]: 2025-11-29 05:18:37.953596807 +0000 UTC m=+0.155208783 container init ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 05:18:37 compute-0 podman[150571]: 2025-11-29 05:18:37.962341906 +0000 UTC m=+0.163953832 container start ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:18:37 compute-0 optimistic_bartik[150630]: 167 167
Nov 29 05:18:37 compute-0 systemd[1]: libpod-ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33.scope: Deactivated successfully.
Nov 29 05:18:37 compute-0 podman[150571]: 2025-11-29 05:18:37.970311691 +0000 UTC m=+0.171923657 container attach ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:18:37 compute-0 podman[150571]: 2025-11-29 05:18:37.97089833 +0000 UTC m=+0.172510266 container died ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:18:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6aeda8a49386d030689c75b424084b5f9bbb1e5ba91061b535cb6811b8c2f69b-merged.mount: Deactivated successfully.
Nov 29 05:18:38 compute-0 podman[150571]: 2025-11-29 05:18:38.013794611 +0000 UTC m=+0.215406537 container remove ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
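
[annotation] The short-lived optimistic_bartik container exists only to print "167 167": the uid and gid of the ceph user baked into the image, which cephadm probes before running ceph-volume so it can chown files correctly. A plausible manual equivalent (the exact probe path is an assumption):

    podman run --rm quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        stat -c '%u %g' /var/lib/ceph    # expected output: 167 167
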
Nov 29 05:18:38 compute-0 sudo[150675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egrlzmpxyxlcylbkrhqpnelybyhshqyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393517.7091963-460-239415188645766/AnsiballZ_file.py'
Nov 29 05:18:38 compute-0 systemd[1]: libpod-conmon-ccdb51fdaa3acbf5de4cedf92b890f36879af3165adb5e499818a9193ea72a33.scope: Deactivated successfully.
Nov 29 05:18:38 compute-0 sudo[150675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:38 compute-0 podman[150686]: 2025-11-29 05:18:38.1492986 +0000 UTC m=+0.041391663 container create 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:18:38 compute-0 systemd[1]: Started libpod-conmon-3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133.scope.
Nov 29 05:18:38 compute-0 python3.9[150680]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:18:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:18:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:38 compute-0 podman[150686]: 2025-11-29 05:18:38.132077039 +0000 UTC m=+0.024170122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:18:38 compute-0 sudo[150675]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:38 compute-0 podman[150686]: 2025-11-29 05:18:38.254462623 +0000 UTC m=+0.146555706 container init 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:18:38 compute-0 podman[150686]: 2025-11-29 05:18:38.273118971 +0000 UTC m=+0.165212034 container start 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:18:38 compute-0 podman[150686]: 2025-11-29 05:18:38.277078092 +0000 UTC m=+0.169171165 container attach 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:18:38 compute-0 ceph-mon[75176]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:38 compute-0 sudo[150857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbpsfzlcsttnulvvenlwdgjfjbkkbyax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393518.4482346-468-115034954856512/AnsiballZ_stat.py'
Nov 29 05:18:38 compute-0 sudo[150857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:39 compute-0 python3.9[150859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:39 compute-0 sudo[150857]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:39 compute-0 adoring_ptolemy[150703]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:18:39 compute-0 adoring_ptolemy[150703]: --> relative data size: 1.0
Nov 29 05:18:39 compute-0 adoring_ptolemy[150703]: --> All data devices are unavailable
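
[annotation] "All data devices are unavailable" in ceph-volume's batch report normally means the passed LVs are already consumed (OSDs were previously prepared on them), so the batch is a no-op; the orchestrator follows up with the "lvm list" invocation a few lines below to reconcile its state. The same inspection can be run by hand with the host's cephadm:

    cephadm ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
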
Nov 29 05:18:39 compute-0 systemd[1]: libpod-3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133.scope: Deactivated successfully.
Nov 29 05:18:39 compute-0 systemd[1]: libpod-3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133.scope: Consumed 1.084s CPU time.
Nov 29 05:18:39 compute-0 podman[150686]: 2025-11-29 05:18:39.432543076 +0000 UTC m=+1.324636209 container died 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 05:18:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-38dfb1a0b8fc44621363fe8058589ce83c176bda7cdf4994e1237b12a5f1ba11-merged.mount: Deactivated successfully.
Nov 29 05:18:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:39 compute-0 podman[150686]: 2025-11-29 05:18:39.511982958 +0000 UTC m=+1.404076031 container remove 3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:18:39 compute-0 systemd[1]: libpod-conmon-3e85aa11dece7692f39680e682882c839d19a4964abb00aed481db5d64250133.scope: Deactivated successfully.
Nov 29 05:18:39 compute-0 sudo[151018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlvuzacbbswjyjbzdlwusvnsktqnufkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393518.4482346-468-115034954856512/AnsiballZ_copy.py'
Nov 29 05:18:39 compute-0 sudo[150437]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:39 compute-0 sudo[151018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:39 compute-0 sudo[151021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:18:39 compute-0 sudo[151021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:39 compute-0 sudo[151021]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:39 compute-0 sudo[151046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:18:39 compute-0 sudo[151046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:39 compute-0 sudo[151046]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:39 compute-0 python3.9[151020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393518.4482346-468-115034954856512/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:18:39 compute-0 sudo[151018]: pam_unix(sudo:session): session closed for user root
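
[annotation] The copy task above records the source checksum; a quick way to confirm the deployed healthcheck matches what Ansible shipped:

    sha1sum /var/lib/openstack/healthchecks/ovn_controller/healthcheck
    # expect: 4098dd010265fabdf5c26b97d169fc4e575ff457
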
Nov 29 05:18:39 compute-0 sudo[151071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:18:39 compute-0 sudo[151071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:39 compute-0 sudo[151071]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:39 compute-0 sudo[151113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:18:39 compute-0 sudo[151113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:40 compute-0 podman[151212]: 2025-11-29 05:18:40.352703657 +0000 UTC m=+0.074025943 container create 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:18:40 compute-0 systemd[1]: Started libpod-conmon-68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18.scope.
Nov 29 05:18:40 compute-0 podman[151212]: 2025-11-29 05:18:40.317133248 +0000 UTC m=+0.038455574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:18:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:18:40 compute-0 podman[151212]: 2025-11-29 05:18:40.446845985 +0000 UTC m=+0.168168341 container init 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:18:40 compute-0 podman[151212]: 2025-11-29 05:18:40.454153057 +0000 UTC m=+0.175475303 container start 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:18:40 compute-0 podman[151212]: 2025-11-29 05:18:40.457393804 +0000 UTC m=+0.178716090 container attach 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:18:40 compute-0 nervous_moser[151276]: 167 167
Nov 29 05:18:40 compute-0 systemd[1]: libpod-68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18.scope: Deactivated successfully.
Nov 29 05:18:40 compute-0 podman[151212]: 2025-11-29 05:18:40.463157305 +0000 UTC m=+0.184479591 container died 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-463ea3f5cc693eafa5649770f8a6ef0eac347396a8cd8ce70f652a725353f8e4-merged.mount: Deactivated successfully.
Nov 29 05:18:40 compute-0 podman[151212]: 2025-11-29 05:18:40.50614995 +0000 UTC m=+0.227472196 container remove 68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:18:40 compute-0 systemd[1]: libpod-conmon-68c443a1c42400b1bed0a3a4a4a8ef5b0e2a7c51f3623ba22675174b4e237b18.scope: Deactivated successfully.
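
The span above is cephadm running `ceph-volume lvm list` inside a short-lived container: podman logs create, init, start, and attach, the container prints "167 167" (presumably the ceph uid/gid pair cephadm probes before issuing the real ceph-volume call), and died plus remove follow within a fraction of a second. A minimal sketch, assuming plain journald text as input, that pairs these events by container ID and reports lifetimes; the event names and timestamp layout are taken from the lines above, everything else is hypothetical:

    import re
    from datetime import datetime

    # Matches lines like:
    #   podman[151212]: 2025-11-29 05:18:40.352703657 +0000 UTC m=+0.074 container create 68c443a1...
    EVENT = re.compile(
        r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.(?P<frac>\d+) .* "
        r"container (?P<event>create|init|start|attach|died|remove) (?P<cid>[0-9a-f]{64})"
    )

    def lifetimes(lines):
        """Map container ID -> seconds between its 'create' and 'remove' events."""
        created = {}
        out = {}
        for line in lines:
            m = EVENT.search(line)
            if not m:
                continue
            # All stamps in this log are UTC, so naive datetimes are fine for deltas.
            ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S").timestamp()
            ts += float("0." + m["frac"])
            if m["event"] == "create":
                created[m["cid"]] = ts
            elif m["event"] == "remove" and m["cid"] in created:
                out[m["cid"]] = ts - created.pop(m["cid"])
        return out

For the nervous_moser container above this yields roughly 0.15 s from create to remove, consistent with a one-shot probe rather than a service container.
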
Nov 29 05:18:40 compute-0 sudo[151348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzlxhdbedwsayfnpqsxabaxslxnfdcje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393520.231758-485-241153228008355/AnsiballZ_file.py'
Nov 29 05:18:40 compute-0 sudo[151348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:40 compute-0 podman[151356]: 2025-11-29 05:18:40.730629905 +0000 UTC m=+0.069261075 container create f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:18:40 compute-0 ceph-mon[75176]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:40 compute-0 systemd[1]: Started libpod-conmon-f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4.scope.
Nov 29 05:18:40 compute-0 python3.9[151350]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:18:40 compute-0 podman[151356]: 2025-11-29 05:18:40.702874897 +0000 UTC m=+0.041506157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:18:40 compute-0 sudo[151348]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa6dc9b0c0d292347dd1a4d2d84d372191cec2376c5d79606cc2684caac7cf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa6dc9b0c0d292347dd1a4d2d84d372191cec2376c5d79606cc2684caac7cf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa6dc9b0c0d292347dd1a4d2d84d372191cec2376c5d79606cc2684caac7cf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa6dc9b0c0d292347dd1a4d2d84d372191cec2376c5d79606cc2684caac7cf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
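
The kernel's "supports timestamps until 2038" notices mean the XFS filesystem backing these overlay mounts was created without the bigtime feature, so inode timestamps are 32-bit and cap at 0x7fffffff (January 2038). That is harmless for bind mounts inside throwaway containers, but worth auditing on long-lived hosts. A hedged check; `xfs_info` is a real xfsprogs tool and recent versions print a `bigtime=` flag, though the exact output varies by version:

    import subprocess

    def has_bigtime(mountpoint="/var"):
        # Recent xfsprogs prints feature flags such as "bigtime=1"; older
        # releases omit the flag entirely, which we conservatively treat as "no".
        info = subprocess.run(["xfs_info", mountpoint],
                              capture_output=True, text=True, check=True).stdout
        return "bigtime=1" in info
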
Nov 29 05:18:40 compute-0 podman[151356]: 2025-11-29 05:18:40.851884302 +0000 UTC m=+0.190515552 container init f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 05:18:40 compute-0 podman[151356]: 2025-11-29 05:18:40.864687256 +0000 UTC m=+0.203318456 container start f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:18:40 compute-0 podman[151356]: 2025-11-29 05:18:40.868346228 +0000 UTC m=+0.206977498 container attach f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:18:41
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'volumes']
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
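
The balancer pass above evaluated every pool in upmap mode with a 5% max-misplaced budget and prepared 0 of a possible 10 changes, i.e. the placement is already balanced. The same state can be queried on demand; a small sketch using the standard `ceph balancer status` mgr command (the specific JSON field names returned are an assumption):

    import json
    import subprocess

    def balancer_status():
        # 'ceph balancer status --format json' returns the active flag and the
        # configured mode, matching the "Mode upmap" line logged above.
        raw = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(raw)
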
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
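
Each pool appears twice in the load_schedules lines because two independent rbd_support handlers, TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler, reload per-pool schedules on the same tick; the empty `start_after=` indicates no schedules are defined yet. Both schedule types have matching list commands in the rbd CLI, sketched here for one pool:

    import subprocess

    def list_schedules(pool):
        # Mirrors the two handlers seen above: trash purge and mirror snapshot.
        for sub in (["trash", "purge"], ["mirror", "snapshot"]):
            subprocess.run(["rbd", *sub, "schedule", "ls", "--pool", pool],
                           check=False)
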
Nov 29 05:18:41 compute-0 sudo[151527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmxkzrdncwakzaoydpfncynpieublnxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393521.1273267-493-226782149232541/AnsiballZ_stat.py'
Nov 29 05:18:41 compute-0 sudo[151527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:41 compute-0 python3.9[151529]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:18:41 compute-0 sudo[151527]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:41 compute-0 nervous_gauss[151373]: {
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:     "0": [
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:         {
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "devices": [
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "/dev/loop3"
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             ],
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_name": "ceph_lv0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_size": "21470642176",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "name": "ceph_lv0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "tags": {
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.cluster_name": "ceph",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.crush_device_class": "",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.encrypted": "0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.osd_id": "0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.type": "block",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.vdo": "0"
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             },
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "type": "block",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "vg_name": "ceph_vg0"
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:         }
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:     ],
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:     "1": [
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:         {
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "devices": [
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "/dev/loop4"
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             ],
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_name": "ceph_lv1",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_size": "21470642176",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "name": "ceph_lv1",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "tags": {
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.cluster_name": "ceph",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.crush_device_class": "",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.encrypted": "0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.osd_id": "1",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.type": "block",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.vdo": "0"
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             },
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "type": "block",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "vg_name": "ceph_vg1"
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:         }
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:     ],
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:     "2": [
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:         {
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "devices": [
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "/dev/loop5"
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             ],
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_name": "ceph_lv2",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_size": "21470642176",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "name": "ceph_lv2",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "tags": {
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.cluster_name": "ceph",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.crush_device_class": "",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.encrypted": "0",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.osd_id": "2",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.type": "block",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:                 "ceph.vdo": "0"
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             },
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "type": "block",
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:             "vg_name": "ceph_vg2"
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:         }
Nov 29 05:18:41 compute-0 nervous_gauss[151373]:     ]
Nov 29 05:18:41 compute-0 nervous_gauss[151373]: }
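
The JSON above is the `ceph-volume lvm list --format json` result: a map from OSD id to the logical volumes backing it, with the same metadata duplicated between `lv_tags` (one flat string) and `tags` (parsed key/value pairs). A minimal sketch that reduces it to an OSD inventory; the input is assumed to be the JSON text captured from the container:

    import json

    def osd_inventory(lvm_list_json):
        """Map osd_id -> (lv_path, osd_fsid, devices) from 'ceph-volume lvm list'."""
        data = json.loads(lvm_list_json)
        inv = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                tags = lv["tags"]
                inv[int(osd_id)] = (lv["lv_path"], tags["ceph.osd_fsid"], lv["devices"])
        return inv

    # For the output above this yields:
    # {0: ('/dev/ceph_vg0/ceph_lv0', '3cc3f442-c807-4e2a-868e-a4aae87af231', ['/dev/loop3']),
    #  1: ('/dev/ceph_vg1/ceph_lv1', 'b9801566-0c31-4202-a669-811037218c27', ['/dev/loop4']),
    #  2: ('/dev/ceph_vg2/ceph_lv2', 'eec69945-b157-41e1-8fba-3992c2dca958', ['/dev/loop5'])}
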
Nov 29 05:18:41 compute-0 systemd[1]: libpod-f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4.scope: Deactivated successfully.
Nov 29 05:18:41 compute-0 podman[151356]: 2025-11-29 05:18:41.655685568 +0000 UTC m=+0.994316738 container died f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 05:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fa6dc9b0c0d292347dd1a4d2d84d372191cec2376c5d79606cc2684caac7cf9-merged.mount: Deactivated successfully.
Nov 29 05:18:41 compute-0 podman[151356]: 2025-11-29 05:18:41.705605092 +0000 UTC m=+1.044236252 container remove f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:18:41 compute-0 systemd[1]: libpod-conmon-f426ee92d90db631a09e35e4ae68504b816f8ab6fd39fa7f427045f727a971d4.scope: Deactivated successfully.
Nov 29 05:18:41 compute-0 sudo[151113]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:41 compute-0 sudo[151592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:18:41 compute-0 sudo[151592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:41 compute-0 sudo[151592]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:41 compute-0 sudo[151640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:18:41 compute-0 sudo[151640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:41 compute-0 sudo[151640]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:41 compute-0 sudo[151685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:18:41 compute-0 sudo[151685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:41 compute-0 sudo[151685]: pam_unix(sudo:session): session closed for user root
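
The `/bin/true` and `which python3` sudo commands are cephadm's connection probes: before shipping the next ceph-volume call over SSH it confirms that ceph-admin has passwordless root and locates a python3 interpreter on the host. A sketch of the same handshake over plain ssh; the user name comes from the log, the rest is illustrative:

    import subprocess

    def probe_host(host, user="ceph-admin"):
        # Same two probes as logged above: passwordless sudo, then the python3 path.
        subprocess.run(["ssh", f"{user}@{host}", "sudo", "/bin/true"], check=True)
        which = subprocess.run(["ssh", f"{user}@{host}", "sudo", "which", "python3"],
                               capture_output=True, text=True, check=True)
        return which.stdout.strip()
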
Nov 29 05:18:41 compute-0 sudo[151745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtkjqmupfzyscwtynwqtsdyhqpyunjwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393521.1273267-493-226782149232541/AnsiballZ_copy.py'
Nov 29 05:18:41 compute-0 sudo[151745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:41 compute-0 sudo[151737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:18:41 compute-0 sudo[151737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:42 compute-0 python3.9[151763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393521.1273267-493-226782149232541/.source.json _original_basename=.3nqn71o2 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:42 compute-0 sudo[151745]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:42 compute-0 podman[151826]: 2025-11-29 05:18:42.243782428 +0000 UTC m=+0.042882801 container create 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:18:42 compute-0 systemd[1]: Started libpod-conmon-7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e.scope.
Nov 29 05:18:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:18:42 compute-0 podman[151826]: 2025-11-29 05:18:42.31537797 +0000 UTC m=+0.114478363 container init 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 05:18:42 compute-0 podman[151826]: 2025-11-29 05:18:42.224156758 +0000 UTC m=+0.023257121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:18:42 compute-0 podman[151826]: 2025-11-29 05:18:42.32231682 +0000 UTC m=+0.121417163 container start 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:18:42 compute-0 podman[151826]: 2025-11-29 05:18:42.326138526 +0000 UTC m=+0.125238909 container attach 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:18:42 compute-0 jolly_gagarin[151847]: 167 167
Nov 29 05:18:42 compute-0 systemd[1]: libpod-7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e.scope: Deactivated successfully.
Nov 29 05:18:42 compute-0 podman[151826]: 2025-11-29 05:18:42.3283568 +0000 UTC m=+0.127457153 container died 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f62424a6029c7d74f5ac2880dfdf518504b46d7902125277d1cbd87c510d9245-merged.mount: Deactivated successfully.
Nov 29 05:18:42 compute-0 podman[151826]: 2025-11-29 05:18:42.361883331 +0000 UTC m=+0.160983684 container remove 7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:18:42 compute-0 systemd[1]: libpod-conmon-7af92c7b8543a5e747dd171d51af0b3fb5f560c8088fdd7200dc01bf119bf85e.scope: Deactivated successfully.
Nov 29 05:18:42 compute-0 podman[151946]: 2025-11-29 05:18:42.546494685 +0000 UTC m=+0.051193916 container create bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 05:18:42 compute-0 systemd[1]: Started libpod-conmon-bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb.scope.
Nov 29 05:18:42 compute-0 podman[151946]: 2025-11-29 05:18:42.526636718 +0000 UTC m=+0.031335989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:18:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4512b4b7ad57ef1793b9b47ffe868cbbce09cc8e8c815a5c7fff7a188a49f6db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4512b4b7ad57ef1793b9b47ffe868cbbce09cc8e8c815a5c7fff7a188a49f6db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4512b4b7ad57ef1793b9b47ffe868cbbce09cc8e8c815a5c7fff7a188a49f6db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4512b4b7ad57ef1793b9b47ffe868cbbce09cc8e8c815a5c7fff7a188a49f6db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:42 compute-0 podman[151946]: 2025-11-29 05:18:42.665171257 +0000 UTC m=+0.169870678 container init bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:18:42 compute-0 podman[151946]: 2025-11-29 05:18:42.676227503 +0000 UTC m=+0.180926744 container start bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:18:42 compute-0 podman[151946]: 2025-11-29 05:18:42.681937492 +0000 UTC m=+0.186636703 container attach bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:18:42 compute-0 sudo[152017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsovpyenhsnaktygfnofhxyygfyiwgon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393522.3278828-508-272424236171210/AnsiballZ_file.py'
Nov 29 05:18:42 compute-0 sudo[152017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:42 compute-0 ceph-mon[75176]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:42 compute-0 python3.9[152019]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:42 compute-0 sudo[152017]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:43 compute-0 sudo[152180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avvhucfytdnemxczvckziufgbtvrehch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393523.1466315-516-152046891590143/AnsiballZ_stat.py'
Nov 29 05:18:43 compute-0 sudo[152180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]: {
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "osd_id": 0,
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "type": "bluestore"
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:     },
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "osd_id": 1,
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "type": "bluestore"
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:     },
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "osd_id": 2,
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:         "type": "bluestore"
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]:     }
Nov 29 05:18:43 compute-0 sharp_mclaren[151986]: }
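
`ceph-volume raw list` reports the same three OSDs, but keyed by osd_uuid rather than osd_id and through the device-mapper paths (/dev/mapper/ceph_vgN-ceph_lvN) instead of the LV paths. Cross-checking the two listings is a cheap consistency test; a sketch that re-parses both JSON outputs captured above:

    import json

    def cross_check(lvm_json, raw_json):
        """Verify every OSD in 'lvm list' appears in 'raw list' with the same fsid."""
        lvm = {lv["tags"]["ceph.osd_fsid"]: int(osd_id)
               for osd_id, lvs in json.loads(lvm_json).items() for lv in lvs}
        raw = {entry["osd_uuid"]: entry["osd_id"]
               for entry in json.loads(raw_json).values()}
        assert lvm == raw, f"inventory mismatch: {lvm} vs {raw}"

For the two blobs in this log both sides reduce to the same three fsid-to-id pairs, so the LVM tags and the on-disk bluestore labels agree.
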
Nov 29 05:18:43 compute-0 sudo[152180]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:43 compute-0 systemd[1]: libpod-bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb.scope: Deactivated successfully.
Nov 29 05:18:43 compute-0 podman[151946]: 2025-11-29 05:18:43.720193225 +0000 UTC m=+1.224892456 container died bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:18:43 compute-0 systemd[1]: libpod-bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb.scope: Consumed 1.046s CPU time.
Nov 29 05:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4512b4b7ad57ef1793b9b47ffe868cbbce09cc8e8c815a5c7fff7a188a49f6db-merged.mount: Deactivated successfully.
Nov 29 05:18:43 compute-0 podman[151946]: 2025-11-29 05:18:43.794469095 +0000 UTC m=+1.299168286 container remove bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:18:43 compute-0 systemd[1]: libpod-conmon-bd6acaa461f0024355ee313d26e65dfb4aed3216263e1c5aa3cf4c92733c8beb.scope: Deactivated successfully.
Nov 29 05:18:43 compute-0 sudo[151737]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:18:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:18:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:18:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:18:43 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev dbce3264-756f-423a-9bc3-c6b60d298ae1 does not exist
Nov 29 05:18:43 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ba05a742-b0bd-4433-ab18-af51642d7e1e does not exist
Nov 29 05:18:43 compute-0 sudo[152261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:18:43 compute-0 sudo[152261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:43 compute-0 sudo[152261]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:43 compute-0 sudo[152309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:18:43 compute-0 sudo[152309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:18:43 compute-0 sudo[152309]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:44 compute-0 sudo[152384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcqzsqdlufsnfewdjvlczscipmmthcbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393523.1466315-516-152046891590143/AnsiballZ_copy.py'
Nov 29 05:18:44 compute-0 sudo[152384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:44 compute-0 sudo[152384]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:44 compute-0 ceph-mon[75176]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:18:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:18:45 compute-0 sudo[152536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmiqagmdkublbexrvkegrrcpzrkpgnjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393524.604979-533-164186750088819/AnsiballZ_container_config_data.py'
Nov 29 05:18:45 compute-0 sudo[152536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:45 compute-0 python3.9[152538]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 05:18:45 compute-0 sudo[152536]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:46 compute-0 sudo[152688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-datcataxgaxybdtwqbyyndbveqsxfvlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393525.6159718-542-177083142500840/AnsiballZ_container_config_hash.py'
Nov 29 05:18:46 compute-0 sudo[152688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:46 compute-0 python3.9[152690]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 05:18:46 compute-0 sudo[152688]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:46 compute-0 ceph-mon[75176]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:47 compute-0 sudo[152840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzvkavqhykudmhqujpsljwkpjozmxbsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393526.8847592-551-236144603142802/AnsiballZ_podman_container_info.py'
Nov 29 05:18:47 compute-0 sudo[152840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:47 compute-0 python3.9[152842]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 05:18:47 compute-0 sudo[152840]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:48 compute-0 ceph-mon[75176]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
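
The recurring pgmap lines are the cluster heartbeat: 305 PGs all active+clean, 456 KiB of logical data, 148 MiB raw used out of 60 GiB across the three OSDs. A small parser for that summary, with the regex derived from the lines above and unit handling limited to the "value unit" pairs that actually occur:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    def parse_pgmap(line):
        m = PGMAP.search(line)
        return m.groupdict() if m else None

    # parse_pgmap("... pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, "
    #             "148 MiB used, 60 GiB / 60 GiB avail")
    # -> {'ver': '414', 'pgs': '305', 'data': '456 KiB', 'used': '148 MiB',
    #     'avail': '60 GiB', 'total': '60 GiB'}
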
Nov 29 05:18:49 compute-0 sudo[153018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbjrpdcngdsrbkhulglxldtsopqbbpwj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764393528.6840317-564-120482482045279/AnsiballZ_edpm_container_manage.py'
Nov 29 05:18:49 compute-0 sudo[153018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:49 compute-0 python3[153020]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 05:18:50 compute-0 ceph-mon[75176]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
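
The pg_autoscaler figures above are reproducible: pg target = usage ratio x pool bias x (target PGs per OSD x number of OSDs), here 100 x 3 = 300. For '.mgr', 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337, exactly the value logged, which quantizes to the minimum of 1; pools with target 0.0 keep their current 32 PGs because the autoscaler only acts when target and current diverge by a large factor (3x by default). A simplified sketch of that computation, with the defaults stated as assumptions:

    import math

    def pg_target(usage_ratio, bias, n_osds=3, target_per_osd=100):
        # mon_target_pg_per_osd defaults to 100; this host has 3 OSDs => 300.
        raw = usage_ratio * bias * n_osds * target_per_osd
        if raw <= 0:
            return None  # autoscaler leaves pg_num unchanged
        # Quantize to a power of two, never below 1 (simplified vs. the real module).
        return max(1, 2 ** round(math.log2(raw)))

    assert abs(7.185749983720779e-06 * 1.0 * 300 - 0.0021557249951162337) < 1e-12
    assert pg_target(7.185749983720779e-06, 1.0) == 1  # '.mgr' in the log above

The cephfs.cephfs.meta and default.rgw.meta lines confirm the bias term: both use bias 4.0, and their logged targets are exactly ratio x 4 x 300.
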
Nov 29 05:18:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:52 compute-0 ceph-mon[75176]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:54 compute-0 sshd-session[153099]: Invalid user admin1 from 152.32.145.111 port 47900
Nov 29 05:18:54 compute-0 sshd-session[153099]: Received disconnect from 152.32.145.111 port 47900:11: Bye Bye [preauth]
Nov 29 05:18:54 compute-0 sshd-session[153099]: Disconnected from invalid user admin1 152.32.145.111 port 47900 [preauth]
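
The three sshd-session lines are unrelated to the deployment: an Internet scanner tried the username admin1 from 152.32.145.111 and disconnected before authenticating. On CI nodes with public addresses this is routine background noise, but it is easy to tally from captured journal text; a sketch:

    import re
    from collections import Counter

    INVALID = re.compile(r"Invalid user (?P<user>\S+) from (?P<src>\S+) port (?P<port>\d+)")

    def failed_probes(lines):
        """Count 'Invalid user' preauth attempts per source IP."""
        hits = Counter()
        for line in lines:
            m = INVALID.search(line)
            if m:
                hits[m["src"]] += 1
        return hits
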
Nov 29 05:18:54 compute-0 podman[153033]: 2025-11-29 05:18:54.293975651 +0000 UTC m=+4.664847993 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 05:18:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:54 compute-0 podman[153154]: 2025-11-29 05:18:54.43583164 +0000 UTC m=+0.061132826 container create 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 05:18:54 compute-0 podman[153154]: 2025-11-29 05:18:54.395578386 +0000 UTC m=+0.020879652 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 05:18:54 compute-0 python3[153020]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 05:18:54 compute-0 sudo[153018]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:54 compute-0 ceph-mon[75176]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:55 compute-0 sudo[153342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjoauzwgggrvtmlpndymbuchjrjcoarm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393534.7759268-572-130192536842401/AnsiballZ_stat.py'
Nov 29 05:18:55 compute-0 sudo[153342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:55 compute-0 python3.9[153344]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:18:55 compute-0 sudo[153342]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:56 compute-0 sudo[153497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztqquemthjsphborxrdumthchxtffpjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393535.6263638-581-183370772364965/AnsiballZ_file.py'
Nov 29 05:18:56 compute-0 sudo[153497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:56 compute-0 python3.9[153500]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:56 compute-0 sudo[153497]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:56 compute-0 sudo[153574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmcksbibfzrzxxlrwqgycjxljbsgszhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393535.6263638-581-183370772364965/AnsiballZ_stat.py'
Nov 29 05:18:56 compute-0 sudo[153574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:56 compute-0 python3.9[153576]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:18:56 compute-0 sudo[153574]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:57 compute-0 ceph-mon[75176]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:57 compute-0 sudo[153725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znravurdbwqcywbzvwazmcimnaajmvxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393536.9099998-581-232314051282663/AnsiballZ_copy.py'
Nov 29 05:18:57 compute-0 sudo[153725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:57 compute-0 python3.9[153727]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393536.9099998-581-232314051282663/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:18:57 compute-0 sudo[153725]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:58 compute-0 sudo[153801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uduxkrjrdlxwismomfgvtzqsqcolrrty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393536.9099998-581-232314051282663/AnsiballZ_systemd.py'
Nov 29 05:18:58 compute-0 sudo[153801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:58 compute-0 ceph-mon[75176]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:58 compute-0 python3.9[153803]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 05:18:58 compute-0 systemd[1]: Reloading.
Nov 29 05:18:58 compute-0 systemd-rc-local-generator[153829]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:18:58 compute-0 systemd-sysv-generator[153833]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:18:58 compute-0 sudo[153801]: pam_unix(sudo:session): session closed for user root
Nov 29 05:18:59 compute-0 sudo[153911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrrpqsyhxerawntzcjeqmrwrbtpfpexc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393536.9099998-581-232314051282663/AnsiballZ_systemd.py'
Nov 29 05:18:59 compute-0 sudo[153911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:18:59 compute-0 python3.9[153913]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:18:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:18:59 compute-0 systemd[1]: Reloading.
Nov 29 05:18:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:18:59 compute-0 systemd-rc-local-generator[153943]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:18:59 compute-0 systemd-sysv-generator[153947]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:18:59 compute-0 systemd[1]: Starting ovn_controller container...
Nov 29 05:18:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8d2e56174bec8578d838178f3d2f095f95316cff89a2a6012a0d38c75eb2b65/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 05:18:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53.
Nov 29 05:18:59 compute-0 podman[153954]: 2025-11-29 05:18:59.896735713 +0000 UTC m=+0.139743002 container init 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 05:18:59 compute-0 ovn_controller[153970]: + sudo -E kolla_set_configs
Nov 29 05:18:59 compute-0 podman[153954]: 2025-11-29 05:18:59.932792194 +0000 UTC m=+0.175799453 container start 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:18:59 compute-0 edpm-start-podman-container[153954]: ovn_controller
Nov 29 05:18:59 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 29 05:18:59 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 05:18:59 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 05:18:59 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 29 05:18:59 compute-0 systemd[153995]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 29 05:19:00 compute-0 edpm-start-podman-container[153953]: Creating additional drop-in dependency for "ovn_controller" (7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53)
Nov 29 05:19:00 compute-0 podman[153976]: 2025-11-29 05:19:00.035609813 +0000 UTC m=+0.084717283 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:19:00 compute-0 systemd[1]: 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53-42826d82be438151.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 05:19:00 compute-0 systemd[1]: 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53-42826d82be438151.service: Failed with result 'exit-code'.
Nov 29 05:19:00 compute-0 systemd[1]: Reloading.
Nov 29 05:19:00 compute-0 systemd-rc-local-generator[154057]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:19:00 compute-0 systemd[153995]: Queued start job for default target Main User Target.
Nov 29 05:19:00 compute-0 systemd-sysv-generator[154061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:19:00 compute-0 systemd[153995]: Created slice User Application Slice.
Nov 29 05:19:00 compute-0 systemd[153995]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 05:19:00 compute-0 systemd[153995]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 05:19:00 compute-0 systemd[153995]: Reached target Paths.
Nov 29 05:19:00 compute-0 systemd[153995]: Reached target Timers.
Nov 29 05:19:00 compute-0 systemd[153995]: Starting D-Bus User Message Bus Socket...
Nov 29 05:19:00 compute-0 systemd[153995]: Starting Create User's Volatile Files and Directories...
Nov 29 05:19:00 compute-0 systemd[153995]: Listening on D-Bus User Message Bus Socket.
Nov 29 05:19:00 compute-0 systemd[153995]: Reached target Sockets.
Nov 29 05:19:00 compute-0 systemd[153995]: Finished Create User's Volatile Files and Directories.
Nov 29 05:19:00 compute-0 systemd[153995]: Reached target Basic System.
Nov 29 05:19:00 compute-0 systemd[153995]: Reached target Main User Target.
Nov 29 05:19:00 compute-0 systemd[153995]: Startup finished in 151ms.
Nov 29 05:19:00 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 29 05:19:00 compute-0 systemd[1]: Started Session c1 of User root.
Nov 29 05:19:00 compute-0 systemd[1]: Started ovn_controller container.
Nov 29 05:19:00 compute-0 sudo[153911]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:00 compute-0 ovn_controller[153970]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 05:19:00 compute-0 ovn_controller[153970]: INFO:__main__:Validating config file
Nov 29 05:19:00 compute-0 ovn_controller[153970]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 05:19:00 compute-0 ovn_controller[153970]: INFO:__main__:Writing out command to execute
Nov 29 05:19:00 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 05:19:00 compute-0 ovn_controller[153970]: ++ cat /run_command
Nov 29 05:19:00 compute-0 ovn_controller[153970]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 05:19:00 compute-0 ovn_controller[153970]: + ARGS=
Nov 29 05:19:00 compute-0 ovn_controller[153970]: + sudo kolla_copy_cacerts
Nov 29 05:19:00 compute-0 systemd[1]: Started Session c2 of User root.
Nov 29 05:19:00 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 05:19:00 compute-0 ovn_controller[153970]: + [[ ! -n '' ]]
Nov 29 05:19:00 compute-0 ovn_controller[153970]: + . kolla_extend_start
Nov 29 05:19:00 compute-0 ovn_controller[153970]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 05:19:00 compute-0 ovn_controller[153970]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 05:19:00 compute-0 ovn_controller[153970]: + umask 0022
Nov 29 05:19:00 compute-0 ovn_controller[153970]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 05:19:00 compute-0 NetworkManager[49073]: <info>  [1764393540.5151] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 05:19:00 compute-0 NetworkManager[49073]: <info>  [1764393540.5163] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 05:19:00 compute-0 NetworkManager[49073]: <info>  [1764393540.5179] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 05:19:00 compute-0 NetworkManager[49073]: <info>  [1764393540.5187] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 05:19:00 compute-0 NetworkManager[49073]: <info>  [1764393540.5194] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 05:19:00 compute-0 kernel: br-int: entered promiscuous mode
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 05:19:00 compute-0 ovn_controller[153970]: 2025-11-29T05:19:00Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 05:19:00 compute-0 NetworkManager[49073]: <info>  [1764393540.5393] manager: (ovn-1193e5-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 05:19:00 compute-0 ceph-mon[75176]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:00 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 05:19:00 compute-0 systemd-udevd[154105]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 05:19:00 compute-0 systemd-udevd[154106]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 05:19:00 compute-0 NetworkManager[49073]: <info>  [1764393540.5650] device (genev_sys_6081): carrier: link connected
Nov 29 05:19:00 compute-0 NetworkManager[49073]: <info>  [1764393540.5656] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 05:19:01 compute-0 sudo[154235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjtrcvmgkogcjtwlzgxffbqgyyvbbdfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393540.7243414-609-70473165637858/AnsiballZ_command.py'
Nov 29 05:19:01 compute-0 sudo[154235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:01 compute-0 python3.9[154237]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:19:01 compute-0 ovs-vsctl[154238]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 05:19:01 compute-0 sudo[154235]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:01 compute-0 sudo[154388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sucfpjxxwblxxyzvhdqwfzathbxmwuom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393541.7010205-617-177947370791985/AnsiballZ_command.py'
Nov 29 05:19:01 compute-0 sudo[154388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:02 compute-0 python3.9[154390]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:19:02 compute-0 ovs-vsctl[154392]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 05:19:02 compute-0 sudo[154388]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:02 compute-0 sudo[154543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keceygngbhnpagwbknszlbsrucxfkedh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393542.738333-631-30221126287879/AnsiballZ_command.py'
Nov 29 05:19:02 compute-0 sudo[154543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:03 compute-0 ceph-mon[75176]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:03 compute-0 python3.9[154545]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:19:03 compute-0 ovs-vsctl[154546]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 05:19:03 compute-0 sudo[154543]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:03 compute-0 sshd-session[142471]: Connection closed by 192.168.122.30 port 56022
Nov 29 05:19:03 compute-0 sshd-session[142468]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:19:03 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 05:19:03 compute-0 systemd[1]: session-45.scope: Consumed 1min 516ms CPU time.
Nov 29 05:19:03 compute-0 systemd-logind[793]: Session 45 logged out. Waiting for processes to exit.
Nov 29 05:19:03 compute-0 systemd-logind[793]: Removed session 45.
Nov 29 05:19:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:05 compute-0 ceph-mon[75176]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:07 compute-0 ceph-mon[75176]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:09 compute-0 ceph-mon[75176]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:09 compute-0 sshd-session[154571]: Accepted publickey for zuul from 192.168.122.30 port 50240 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:19:09 compute-0 systemd-logind[793]: New session 47 of user zuul.
Nov 29 05:19:09 compute-0 systemd[1]: Started Session 47 of User zuul.
Nov 29 05:19:09 compute-0 sshd-session[154571]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:19:10 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 29 05:19:10 compute-0 systemd[153995]: Activating special unit Exit the Session...
Nov 29 05:19:10 compute-0 systemd[153995]: Stopped target Main User Target.
Nov 29 05:19:10 compute-0 systemd[153995]: Stopped target Basic System.
Nov 29 05:19:10 compute-0 systemd[153995]: Stopped target Paths.
Nov 29 05:19:10 compute-0 systemd[153995]: Stopped target Sockets.
Nov 29 05:19:10 compute-0 systemd[153995]: Stopped target Timers.
Nov 29 05:19:10 compute-0 systemd[153995]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 05:19:10 compute-0 systemd[153995]: Closed D-Bus User Message Bus Socket.
Nov 29 05:19:10 compute-0 systemd[153995]: Stopped Create User's Volatile Files and Directories.
Nov 29 05:19:10 compute-0 systemd[153995]: Removed slice User Application Slice.
Nov 29 05:19:10 compute-0 systemd[153995]: Reached target Shutdown.
Nov 29 05:19:10 compute-0 systemd[153995]: Finished Exit the Session.
Nov 29 05:19:10 compute-0 systemd[153995]: Reached target Exit the Session.
Nov 29 05:19:10 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 05:19:10 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 29 05:19:10 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 05:19:10 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 05:19:10 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 05:19:10 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 05:19:10 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 05:19:10 compute-0 python3.9[154726]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:19:11 compute-0 ceph-mon[75176]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:19:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:19:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:19:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:19:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:19:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:19:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:11 compute-0 sudo[154880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bobtqbizggsvcewxgivlihjngeeagsen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393551.409617-34-228890043651156/AnsiballZ_file.py'
Nov 29 05:19:11 compute-0 sudo[154880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:12 compute-0 python3.9[154882]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:12 compute-0 sudo[154880]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:12 compute-0 ceph-mon[75176]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:12 compute-0 sudo[155032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqnydjkqegafbbmrgkzdycpuagyehget ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393552.2923756-34-210110551229937/AnsiballZ_file.py'
Nov 29 05:19:12 compute-0 sudo[155032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:12 compute-0 python3.9[155034]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:12 compute-0 sudo[155032]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:13 compute-0 sudo[155184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxaicrrjqxxcoadyreyyjmzxxivhqgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393553.062652-34-138564449024682/AnsiballZ_file.py'
Nov 29 05:19:13 compute-0 sudo[155184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:13 compute-0 python3.9[155186]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:13 compute-0 sudo[155184]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:14 compute-0 sudo[155336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjjrtwkfvjigarxpotdftbhwulhzufbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393553.874156-34-181399013352757/AnsiballZ_file.py'
Nov 29 05:19:14 compute-0 sudo[155336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:14 compute-0 python3.9[155338]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:14 compute-0 sudo[155336]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:14 compute-0 ceph-mon[75176]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:14 compute-0 sudo[155488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeyqaecijzkolpbfdckqpdtfwwhnfpxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393554.5824556-34-181972712998342/AnsiballZ_file.py'
Nov 29 05:19:14 compute-0 sudo[155488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:15 compute-0 python3.9[155490]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:15 compute-0 sudo[155488]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:15 compute-0 python3.9[155640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:19:16 compute-0 ceph-mon[75176]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:16 compute-0 sudo[155790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imdioiklltjcpuppubrveqamknqnpjql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393556.1850443-78-149608236573390/AnsiballZ_seboolean.py'
Nov 29 05:19:16 compute-0 sudo[155790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:16 compute-0 python3.9[155792]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 05:19:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:17 compute-0 sudo[155790]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:18 compute-0 python3.9[155942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:18 compute-0 ceph-mon[75176]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:19 compute-0 python3.9[156064]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393557.680408-86-62116215349733/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:19 compute-0 python3.9[156214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:20 compute-0 python3.9[156335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393559.3845565-101-76019781011967/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:20 compute-0 ceph-mon[75176]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:21 compute-0 sudo[156485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-likwzwsnbpqzgdytzviporhrczbqldom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393560.8320706-118-188418912009692/AnsiballZ_setup.py'
Nov 29 05:19:21 compute-0 sudo[156485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:21 compute-0 python3.9[156487]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:19:21 compute-0 sudo[156485]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:22 compute-0 sudo[156569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjxnpbxnxoewpcxfevakqzujoonmqpro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393560.8320706-118-188418912009692/AnsiballZ_dnf.py'
Nov 29 05:19:22 compute-0 sudo[156569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:22 compute-0 python3.9[156571]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:19:22 compute-0 ceph-mon[75176]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:23 compute-0 sudo[156569]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:24 compute-0 ceph-mon[75176]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:24 compute-0 sudo[156722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttptgcsfqynpdlkodbmaxpucncetuvye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393563.9484096-130-185086964783837/AnsiballZ_systemd.py'
Nov 29 05:19:24 compute-0 sudo[156722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:24 compute-0 python3.9[156724]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 05:19:25 compute-0 sudo[156722]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:25 compute-0 python3.9[156877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:26 compute-0 python3.9[156998]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393565.228068-138-1099252354488/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:26 compute-0 ceph-mon[75176]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:26 compute-0 python3.9[157148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:27 compute-0 python3.9[157269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393566.3816183-138-171446976670684/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:28 compute-0 ceph-mon[75176]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:28 compute-0 python3.9[157419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:29 compute-0 python3.9[157540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393568.422645-182-269222877716292/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:30 compute-0 ovn_controller[153970]: 2025-11-29T05:19:30Z|00025|memory|INFO|17408 kB peak resident set size after 29.9 seconds
Nov 29 05:19:30 compute-0 ovn_controller[153970]: 2025-11-29T05:19:30Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 29 05:19:30 compute-0 podman[157664]: 2025-11-29 05:19:30.383119757 +0000 UTC m=+0.106061175 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 29 05:19:30 compute-0 python3.9[157699]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:30 compute-0 ceph-mon[75176]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:31 compute-0 python3.9[157838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393569.789758-182-189574762258839/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:31 compute-0 python3.9[157988]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:19:32 compute-0 sudo[158140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfxphhgqqgebstdbjaivoxnyrsoqkcme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393572.1173909-220-70080989706002/AnsiballZ_file.py'
Nov 29 05:19:32 compute-0 sudo[158140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:32 compute-0 python3.9[158142]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:32 compute-0 ceph-mon[75176]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:32 compute-0 sudo[158140]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:33 compute-0 sudo[158292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emedhvlinybalsmwhtbjzekfuaijlwwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393572.8664045-228-193775895338852/AnsiballZ_stat.py'
Nov 29 05:19:33 compute-0 sudo[158292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:33 compute-0 python3.9[158294]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:33 compute-0 sudo[158292]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:33 compute-0 sudo[158370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojannpugwqorgwfbgqjuhqqhmktaeefp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393572.8664045-228-193775895338852/AnsiballZ_file.py'
Nov 29 05:19:33 compute-0 sudo[158370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:33 compute-0 python3.9[158372]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:33 compute-0 sudo[158370]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:34 compute-0 sudo[158522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijkomdjartcrhfnvjhafzahofsieilwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393574.081526-228-164211044655487/AnsiballZ_stat.py'
Nov 29 05:19:34 compute-0 sudo[158522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:34 compute-0 python3.9[158524]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:34 compute-0 ceph-mon[75176]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:34 compute-0 sudo[158522]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:34 compute-0 sudo[158600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxnbjicgcotjauxxvlyietzbvzflfgjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393574.081526-228-164211044655487/AnsiballZ_file.py'
Nov 29 05:19:34 compute-0 sudo[158600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:35 compute-0 python3.9[158602]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:35 compute-0 sudo[158600]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:35 compute-0 sudo[158752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyerdxzdkfuwhqfldifagqtqikticdvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393575.380838-251-140102269280264/AnsiballZ_file.py'
Nov 29 05:19:35 compute-0 sudo[158752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:35 compute-0 python3.9[158754]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:19:35 compute-0 sudo[158752]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:36 compute-0 sudo[158904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlkujgreatffabyhnrfegldwkqbzhubh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393576.0842977-259-256351062674675/AnsiballZ_stat.py'
Nov 29 05:19:36 compute-0 sudo[158904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:36 compute-0 python3.9[158906]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:36 compute-0 sudo[158904]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:36 compute-0 ceph-mon[75176]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:36 compute-0 sudo[158982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddteqxqyzfwzfykcrlximokgzbgfjror ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393576.0842977-259-256351062674675/AnsiballZ_file.py'
Nov 29 05:19:36 compute-0 sudo[158982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:37 compute-0 python3.9[158984]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:19:37 compute-0 sudo[158982]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:37 compute-0 sudo[159134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quwdrrdoadrzzsbgdfyjgwcmnommujjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393577.3176203-271-6890491635102/AnsiballZ_stat.py'
Nov 29 05:19:37 compute-0 sudo[159134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:37 compute-0 python3.9[159136]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:37 compute-0 sudo[159134]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:19:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5568 writes, 24K keys, 5568 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5568 writes, 870 syncs, 6.40 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5568 writes, 24K keys, 5568 commit groups, 1.0 writes per commit group, ingest: 18.63 MB, 0.03 MB/s
                                           Interval WAL: 5568 writes, 870 syncs, 6.40 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:19:38 compute-0 sudo[159212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfkmdgbwmchvdnakgkyfoorjajiqzzbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393577.3176203-271-6890491635102/AnsiballZ_file.py'
Nov 29 05:19:38 compute-0 sudo[159212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:38 compute-0 python3.9[159214]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:19:38 compute-0 sudo[159212]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:38 compute-0 ceph-mon[75176]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:38 compute-0 sudo[159364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlzbhemjcsplbhxceqzdytyqksaiozln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393578.6601305-283-203233030265882/AnsiballZ_systemd.py'
Nov 29 05:19:38 compute-0 sudo[159364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:39 compute-0 python3.9[159366]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:19:39 compute-0 systemd[1]: Reloading.
Nov 29 05:19:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:39 compute-0 systemd-rc-local-generator[159392]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:19:39 compute-0 systemd-sysv-generator[159395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:19:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:39 compute-0 sudo[159364]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:40 compute-0 sudo[159552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwuwtvcauodkyluggapbzeuwluaaofgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393579.8816588-291-241815780919073/AnsiballZ_stat.py'
Nov 29 05:19:40 compute-0 sudo[159552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:40 compute-0 python3.9[159554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:40 compute-0 sudo[159552]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:40 compute-0 ceph-mon[75176]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:40 compute-0 sudo[159630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-affaxuksmlvkqrphsigssnxttfrndzzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393579.8816588-291-241815780919073/AnsiballZ_file.py'
Nov 29 05:19:40 compute-0 sudo[159630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:40 compute-0 python3.9[159632]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:19:40 compute-0 sudo[159630]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:19:41
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'backups', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes']
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:19:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:41 compute-0 sudo[159782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfpewvqbxfdjottatojivdicwddpvsrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393581.124677-303-166226128799247/AnsiballZ_stat.py'
Nov 29 05:19:41 compute-0 sudo[159782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:41 compute-0 python3.9[159784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:41 compute-0 sudo[159782]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:42 compute-0 sudo[159860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsbkkdwjtrcbaoejylvnqmykcuwdxagd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393581.124677-303-166226128799247/AnsiballZ_file.py'
Nov 29 05:19:42 compute-0 sudo[159860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:42 compute-0 python3.9[159862]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:19:42 compute-0 sudo[159860]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:19:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6875 writes, 28K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6875 writes, 1210 syncs, 5.68 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6875 writes, 28K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 19.64 MB, 0.03 MB/s
                                           Interval WAL: 6875 writes, 1210 syncs, 5.68 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:19:42 compute-0 ceph-mon[75176]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:42 compute-0 sudo[160012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igpxudwsdowltkflrjsrlebprcvmoaqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393582.4943748-315-270180209408727/AnsiballZ_systemd.py'
Nov 29 05:19:42 compute-0 sudo[160012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:43 compute-0 python3.9[160014]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:19:43 compute-0 systemd[1]: Reloading.
Nov 29 05:19:43 compute-0 systemd-sysv-generator[160045]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:19:43 compute-0 systemd-rc-local-generator[160040]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:19:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:43 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 05:19:43 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 05:19:43 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 05:19:43 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 05:19:43 compute-0 sudo[160012]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:44 compute-0 sudo[160109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:44 compute-0 sudo[160109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:44 compute-0 sudo[160109]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:44 compute-0 sudo[160157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:19:44 compute-0 sudo[160157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:44 compute-0 sudo[160157]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:44 compute-0 sudo[160205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:44 compute-0 sudo[160205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:44 compute-0 sudo[160205]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:44 compute-0 sudo[160236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 05:19:44 compute-0 sudo[160236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:44 compute-0 sudo[160305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdwqdlgjfqzxybqbuonxkvpwtdjpaxzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393583.9388502-325-65485747714029/AnsiballZ_file.py'
Nov 29 05:19:44 compute-0 sudo[160305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:44 compute-0 python3.9[160307]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:44 compute-0 sudo[160236]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:19:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:19:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:19:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:19:44 compute-0 sudo[160305]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:44 compute-0 sudo[160327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:44 compute-0 sudo[160327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:44 compute-0 sudo[160327]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:44 compute-0 sudo[160376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:19:44 compute-0 sudo[160376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:44 compute-0 sudo[160376]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:44 compute-0 sudo[160401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:44 compute-0 sudo[160401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:44 compute-0 sudo[160401]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:44 compute-0 ceph-mon[75176]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:19:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:19:44 compute-0 sudo[160433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:19:44 compute-0 sudo[160433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:45 compute-0 sudo[160594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzwwxswkgvnhjdducpcespcyqzbmqpqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393584.7334507-333-189093488402961/AnsiballZ_stat.py'
Nov 29 05:19:45 compute-0 sudo[160594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:45 compute-0 sudo[160433]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:19:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:19:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:19:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:19:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:19:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:19:45 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d9f466a6-ab2c-4a2b-95c5-82afea8a2723 does not exist
Nov 29 05:19:45 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 6f945ae9-4ed7-4f7c-8ea3-1b559a334e33 does not exist
Nov 29 05:19:45 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 84eeb6d8-a70f-4323-89bb-5f47baa46c0f does not exist
Nov 29 05:19:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:19:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:19:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:19:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:19:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:19:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:19:45 compute-0 python3.9[160598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:45 compute-0 sudo[160594]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:45 compute-0 sudo[160613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:45 compute-0 sudo[160613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:45 compute-0 sudo[160613]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:45 compute-0 sudo[160644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:19:45 compute-0 sudo[160644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:45 compute-0 sudo[160644]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:45 compute-0 sudo[160686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:45 compute-0 sudo[160686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:45 compute-0 sudo[160686]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:45 compute-0 sudo[160735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:19:45 compute-0 sudo[160735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:45 compute-0 sudo[160833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzlbnhewljrvucpfmzqesmqwzpcjukrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393584.7334507-333-189093488402961/AnsiballZ_copy.py'
Nov 29 05:19:45 compute-0 sudo[160833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:19:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:19:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:19:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:19:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:19:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:19:45 compute-0 podman[160876]: 2025-11-29 05:19:45.858079726 +0000 UTC m=+0.044302391 container create 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:19:45 compute-0 python3.9[160840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393584.7334507-333-189093488402961/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:45 compute-0 systemd[1]: Started libpod-conmon-9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514.scope.
Nov 29 05:19:45 compute-0 sudo[160833]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:45 compute-0 podman[160876]: 2025-11-29 05:19:45.835417217 +0000 UTC m=+0.021639902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:19:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:19:45 compute-0 podman[160876]: 2025-11-29 05:19:45.950707105 +0000 UTC m=+0.136929740 container init 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 05:19:45 compute-0 podman[160876]: 2025-11-29 05:19:45.965163927 +0000 UTC m=+0.151386562 container start 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:19:45 compute-0 podman[160876]: 2025-11-29 05:19:45.968546816 +0000 UTC m=+0.154769471 container attach 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 05:19:45 compute-0 upbeat_elion[160893]: 167 167
Nov 29 05:19:45 compute-0 systemd[1]: libpod-9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514.scope: Deactivated successfully.
Nov 29 05:19:45 compute-0 podman[160876]: 2025-11-29 05:19:45.971589247 +0000 UTC m=+0.157811892 container died 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:19:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1d92a3fe710eefb39590a09c32e1588e26ae5584f8a083693ad9be77e411c4c-merged.mount: Deactivated successfully.
Nov 29 05:19:46 compute-0 podman[160876]: 2025-11-29 05:19:46.008959545 +0000 UTC m=+0.195182190 container remove 9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_elion, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:19:46 compute-0 systemd[1]: libpod-conmon-9cf34f1ebcad9800af57b6db23f74b7d7f79680130c08c6b14a34454cd192514.scope: Deactivated successfully.
Nov 29 05:19:46 compute-0 podman[160940]: 2025-11-29 05:19:46.192938348 +0000 UTC m=+0.055385675 container create 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 05:19:46 compute-0 systemd[1]: Started libpod-conmon-6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638.scope.
Nov 29 05:19:46 compute-0 podman[160940]: 2025-11-29 05:19:46.165139394 +0000 UTC m=+0.027586821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:19:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:46 compute-0 podman[160940]: 2025-11-29 05:19:46.303705117 +0000 UTC m=+0.166152494 container init 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:19:46 compute-0 podman[160940]: 2025-11-29 05:19:46.325160333 +0000 UTC m=+0.187607670 container start 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:19:46 compute-0 podman[160940]: 2025-11-29 05:19:46.330500085 +0000 UTC m=+0.192947422 container attach 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:19:46 compute-0 ceph-mon[75176]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:46 compute-0 sudo[161088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxhlffsevxtxyznmpuatgebuqvvrtjjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393586.3953586-350-113142809105277/AnsiballZ_file.py'
Nov 29 05:19:46 compute-0 sudo[161088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:46 compute-0 python3.9[161090]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:19:47 compute-0 sudo[161088]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:47 compute-0 beautiful_brown[160957]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:19:47 compute-0 beautiful_brown[160957]: --> relative data size: 1.0
Nov 29 05:19:47 compute-0 beautiful_brown[160957]: --> All data devices are unavailable
Nov 29 05:19:47 compute-0 systemd[1]: libpod-6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638.scope: Deactivated successfully.
Nov 29 05:19:47 compute-0 podman[160940]: 2025-11-29 05:19:47.34303491 +0000 UTC m=+1.205482237 container died 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:19:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-66a67e728b64c67dc213aa6ca59622a357de7415a2c192321da089a8687f3043-merged.mount: Deactivated successfully.
Nov 29 05:19:47 compute-0 podman[160940]: 2025-11-29 05:19:47.392046895 +0000 UTC m=+1.254494222 container remove 6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 05:19:47 compute-0 systemd[1]: libpod-conmon-6227a5c5dad429e1b568a17d4300e019739c4892115b05853df3bd1e885f2638.scope: Deactivated successfully.
Nov 29 05:19:47 compute-0 sudo[160735]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:47 compute-0 sudo[161227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:47 compute-0 sudo[161227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:47 compute-0 sudo[161227]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:47 compute-0 sudo[161276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:19:47 compute-0 sudo[161276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:47 compute-0 sudo[161276]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:47 compute-0 sudo[161326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlfelkhsogctreqlibgdbofhffcbhfzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393587.234265-358-175979518254990/AnsiballZ_stat.py'
Nov 29 05:19:47 compute-0 sudo[161326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:47 compute-0 sudo[161329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:47 compute-0 sudo[161329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:47 compute-0 sudo[161329]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:47 compute-0 sudo[161355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:19:47 compute-0 sudo[161355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:47 compute-0 python3.9[161333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:19:47 compute-0 sudo[161326]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:19:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5451 writes, 23K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5451 writes, 770 syncs, 7.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5451 writes, 23K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 18.29 MB, 0.03 MB/s
                                           Interval WAL: 5451 writes, 770 syncs, 7.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:19:47 compute-0 podman[161420]: 2025-11-29 05:19:47.972950661 +0000 UTC m=+0.053175747 container create 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:19:48 compute-0 systemd[1]: Started libpod-conmon-2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0.scope.
Nov 29 05:19:48 compute-0 podman[161420]: 2025-11-29 05:19:47.954824831 +0000 UTC m=+0.035049887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:19:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:19:48 compute-0 podman[161420]: 2025-11-29 05:19:48.076522589 +0000 UTC m=+0.156747735 container init 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:19:48 compute-0 podman[161420]: 2025-11-29 05:19:48.083642777 +0000 UTC m=+0.163867853 container start 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:19:48 compute-0 podman[161420]: 2025-11-29 05:19:48.08832172 +0000 UTC m=+0.168546816 container attach 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 05:19:48 compute-0 agitated_villani[161483]: 167 167
Nov 29 05:19:48 compute-0 systemd[1]: libpod-2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0.scope: Deactivated successfully.
Nov 29 05:19:48 compute-0 podman[161420]: 2025-11-29 05:19:48.092528071 +0000 UTC m=+0.172753157 container died 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:19:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9903d32b0f0d125944bec1060df53019019714d02cdc8bef79a513a9989e4e1-merged.mount: Deactivated successfully.
Nov 29 05:19:48 compute-0 podman[161420]: 2025-11-29 05:19:48.138496186 +0000 UTC m=+0.218721272 container remove 2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_villani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:19:48 compute-0 systemd[1]: libpod-conmon-2a2c043fbe70f8e4cb5065778d9df38d09d153f09677c260c5495e32b28b31c0.scope: Deactivated successfully.
Nov 29 05:19:48 compute-0 sudo[161574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjcjmxiqshcyrvavtmfxlezgrkiyodvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393587.234265-358-175979518254990/AnsiballZ_copy.py'
Nov 29 05:19:48 compute-0 sudo[161574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:48 compute-0 podman[161581]: 2025-11-29 05:19:48.336136611 +0000 UTC m=+0.052356995 container create 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:19:48 compute-0 podman[161581]: 2025-11-29 05:19:48.314303364 +0000 UTC m=+0.030523778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:19:48 compute-0 systemd[1]: Started libpod-conmon-7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a.scope.
Nov 29 05:19:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16933d9143c24b5ecf22ef4d3a49003e7cb2fc8b5f16fe432c33f0a0ee82ba13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16933d9143c24b5ecf22ef4d3a49003e7cb2fc8b5f16fe432c33f0a0ee82ba13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16933d9143c24b5ecf22ef4d3a49003e7cb2fc8b5f16fe432c33f0a0ee82ba13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16933d9143c24b5ecf22ef4d3a49003e7cb2fc8b5f16fe432c33f0a0ee82ba13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:48 compute-0 podman[161581]: 2025-11-29 05:19:48.463916598 +0000 UTC m=+0.180137002 container init 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 05:19:48 compute-0 podman[161581]: 2025-11-29 05:19:48.470664317 +0000 UTC m=+0.186884711 container start 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 05:19:48 compute-0 podman[161581]: 2025-11-29 05:19:48.473887003 +0000 UTC m=+0.190107397 container attach 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 05:19:48 compute-0 python3.9[161583]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393587.234265-358-175979518254990/.source.json _original_basename=.swln47tk follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:19:48 compute-0 sudo[161574]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:48 compute-0 ceph-mon[75176]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:48 compute-0 sudo[161753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrwlsbcstsrzrcawlivhzqdxbbymnjrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393588.6932337-373-232841894424672/AnsiballZ_file.py'
Nov 29 05:19:48 compute-0 sudo[161753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:49 compute-0 python3.9[161755]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:19:49 compute-0 sudo[161753]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:49 compute-0 cool_hawking[161599]: {
Nov 29 05:19:49 compute-0 cool_hawking[161599]:     "0": [
Nov 29 05:19:49 compute-0 cool_hawking[161599]:         {
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "devices": [
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "/dev/loop3"
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             ],
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_name": "ceph_lv0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_size": "21470642176",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "name": "ceph_lv0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "tags": {
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.cluster_name": "ceph",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.crush_device_class": "",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.encrypted": "0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.osd_id": "0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.type": "block",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.vdo": "0"
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             },
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "type": "block",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "vg_name": "ceph_vg0"
Nov 29 05:19:49 compute-0 cool_hawking[161599]:         }
Nov 29 05:19:49 compute-0 cool_hawking[161599]:     ],
Nov 29 05:19:49 compute-0 cool_hawking[161599]:     "1": [
Nov 29 05:19:49 compute-0 cool_hawking[161599]:         {
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "devices": [
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "/dev/loop4"
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             ],
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_name": "ceph_lv1",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_size": "21470642176",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "name": "ceph_lv1",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "tags": {
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.cluster_name": "ceph",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.crush_device_class": "",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.encrypted": "0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.osd_id": "1",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.type": "block",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.vdo": "0"
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             },
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "type": "block",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "vg_name": "ceph_vg1"
Nov 29 05:19:49 compute-0 cool_hawking[161599]:         }
Nov 29 05:19:49 compute-0 cool_hawking[161599]:     ],
Nov 29 05:19:49 compute-0 cool_hawking[161599]:     "2": [
Nov 29 05:19:49 compute-0 cool_hawking[161599]:         {
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "devices": [
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "/dev/loop5"
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             ],
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_name": "ceph_lv2",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_size": "21470642176",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "name": "ceph_lv2",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "tags": {
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.cluster_name": "ceph",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.crush_device_class": "",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.encrypted": "0",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.osd_id": "2",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.type": "block",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:                 "ceph.vdo": "0"
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             },
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "type": "block",
Nov 29 05:19:49 compute-0 cool_hawking[161599]:             "vg_name": "ceph_vg2"
Nov 29 05:19:49 compute-0 cool_hawking[161599]:         }
Nov 29 05:19:49 compute-0 cool_hawking[161599]:     ]
Nov 29 05:19:49 compute-0 cool_hawking[161599]: }
Nov 29 05:19:49 compute-0 systemd[1]: libpod-7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a.scope: Deactivated successfully.
Nov 29 05:19:49 compute-0 podman[161581]: 2025-11-29 05:19:49.257089035 +0000 UTC m=+0.973309469 container died 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:19:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-16933d9143c24b5ecf22ef4d3a49003e7cb2fc8b5f16fe432c33f0a0ee82ba13-merged.mount: Deactivated successfully.
Nov 29 05:19:49 compute-0 podman[161581]: 2025-11-29 05:19:49.332788726 +0000 UTC m=+1.049009120 container remove 7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:19:49 compute-0 systemd[1]: libpod-conmon-7a727f72f0c8a48b6c59a25a946e81697bf1e5f8484b215ee644022e1055ba5a.scope: Deactivated successfully.
Nov 29 05:19:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:49 compute-0 sudo[161355]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:49 compute-0 sudo[161817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:49 compute-0 sudo[161817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:49 compute-0 sudo[161817]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:49 compute-0 sudo[161875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:19:49 compute-0 sudo[161875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:49 compute-0 sudo[161875]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:49 compute-0 sudo[161924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:49 compute-0 sudo[161924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:49 compute-0 sudo[161924]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:49 compute-0 sudo[161972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:19:49 compute-0 sudo[161972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:49 compute-0 sudo[162024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwrmutqzyjvyqjwlqgtrowplfbzkfztt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393589.4254396-381-261990417184726/AnsiballZ_stat.py'
Nov 29 05:19:49 compute-0 sudo[162024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:49 compute-0 sudo[162024]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:50 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Check health
Nov 29 05:19:50 compute-0 podman[162092]: 2025-11-29 05:19:50.17693753 +0000 UTC m=+0.057132701 container create 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:19:50 compute-0 systemd[1]: Started libpod-conmon-3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619.scope.
Nov 29 05:19:50 compute-0 podman[162092]: 2025-11-29 05:19:50.158183654 +0000 UTC m=+0.038378855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:19:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:19:50 compute-0 podman[162092]: 2025-11-29 05:19:50.2836284 +0000 UTC m=+0.163823661 container init 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 05:19:50 compute-0 podman[162092]: 2025-11-29 05:19:50.290360168 +0000 UTC m=+0.170555359 container start 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 05:19:50 compute-0 podman[162092]: 2025-11-29 05:19:50.29458387 +0000 UTC m=+0.174779061 container attach 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:19:50 compute-0 vigorous_khayyam[162153]: 167 167
Nov 29 05:19:50 compute-0 systemd[1]: libpod-3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619.scope: Deactivated successfully.
Nov 29 05:19:50 compute-0 podman[162092]: 2025-11-29 05:19:50.296076889 +0000 UTC m=+0.176272080 container died 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:19:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-593f2b7871581a1b790f186ab0e1a6684b1fc8653ff154a9c03f085c1f2d01ac-merged.mount: Deactivated successfully.
Nov 29 05:19:50 compute-0 podman[162092]: 2025-11-29 05:19:50.341628873 +0000 UTC m=+0.221824064 container remove 3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:19:50 compute-0 systemd[1]: libpod-conmon-3787e769840b43dc2189756eaff1b2d5cf7de8b93e74067d2d7010920ddf3619.scope: Deactivated successfully.
Nov 29 05:19:50 compute-0 sudo[162225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocrhmkunpvrpdizcpwqpgsdkqyueyozr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393589.4254396-381-261990417184726/AnsiballZ_copy.py'
Nov 29 05:19:50 compute-0 sudo[162225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:50 compute-0 podman[162233]: 2025-11-29 05:19:50.531924073 +0000 UTC m=+0.058972210 container create 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:19:50 compute-0 sshd-session[160497]: Received disconnect from 101.47.141.125 port 54746:11: Bye Bye [preauth]
Nov 29 05:19:50 compute-0 sshd-session[160497]: Disconnected from authenticating user root 101.47.141.125 port 54746 [preauth]
Nov 29 05:19:50 compute-0 systemd[1]: Started libpod-conmon-989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2.scope.
Nov 29 05:19:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7edbbe30212488ae40e72b52eeba3721963a0526bcf6f93c889827ba700c8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7edbbe30212488ae40e72b52eeba3721963a0526bcf6f93c889827ba700c8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7edbbe30212488ae40e72b52eeba3721963a0526bcf6f93c889827ba700c8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7edbbe30212488ae40e72b52eeba3721963a0526bcf6f93c889827ba700c8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:19:50 compute-0 podman[162233]: 2025-11-29 05:19:50.509957843 +0000 UTC m=+0.037005980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:19:50 compute-0 podman[162233]: 2025-11-29 05:19:50.604727458 +0000 UTC m=+0.131775615 container init 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:19:50 compute-0 podman[162233]: 2025-11-29 05:19:50.620380192 +0000 UTC m=+0.147428299 container start 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:19:50 compute-0 podman[162233]: 2025-11-29 05:19:50.623931905 +0000 UTC m=+0.150980022 container attach 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:19:50 compute-0 sudo[162225]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:50 compute-0 ceph-mon[75176]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:19:51 compute-0 sudo[162418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxghvnnawqxfmasijwsmwmsywoemjxpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393590.9286356-398-254447893273403/AnsiballZ_container_config_data.py'
Nov 29 05:19:51 compute-0 sudo[162418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:51 compute-0 objective_tharp[162250]: {
Nov 29 05:19:51 compute-0 objective_tharp[162250]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "osd_id": 0,
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "type": "bluestore"
Nov 29 05:19:51 compute-0 objective_tharp[162250]:     },
Nov 29 05:19:51 compute-0 objective_tharp[162250]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "osd_id": 1,
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "type": "bluestore"
Nov 29 05:19:51 compute-0 objective_tharp[162250]:     },
Nov 29 05:19:51 compute-0 objective_tharp[162250]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "osd_id": 2,
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:19:51 compute-0 objective_tharp[162250]:         "type": "bluestore"
Nov 29 05:19:51 compute-0 objective_tharp[162250]:     }
Nov 29 05:19:51 compute-0 objective_tharp[162250]: }
Nov 29 05:19:51 compute-0 systemd[1]: libpod-989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2.scope: Deactivated successfully.
Nov 29 05:19:51 compute-0 podman[162233]: 2025-11-29 05:19:51.59672998 +0000 UTC m=+1.123778077 container died 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:19:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e7edbbe30212488ae40e72b52eeba3721963a0526bcf6f93c889827ba700c8c-merged.mount: Deactivated successfully.
Nov 29 05:19:51 compute-0 podman[162233]: 2025-11-29 05:19:51.661494082 +0000 UTC m=+1.188542189 container remove 989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 29 05:19:51 compute-0 python3.9[162423]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 05:19:51 compute-0 systemd[1]: libpod-conmon-989ecd627b51ba50f8073f526a017fa15264642062dd37c170c36137a82c73c2.scope: Deactivated successfully.
Nov 29 05:19:51 compute-0 sudo[161972]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:51 compute-0 sudo[162418]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:19:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:19:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:19:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 3881b643-1e8b-442f-9b3f-ce629860a544 does not exist
Nov 29 05:19:51 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 4ae5a2d9-3876-4d7c-9f2a-564aac206b00 does not exist
Nov 29 05:19:51 compute-0 sudo[162449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:19:51 compute-0 sudo[162449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:51 compute-0 sudo[162449]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:51 compute-0 sudo[162498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:19:51 compute-0 sudo[162498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:19:51 compute-0 sudo[162498]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:52 compute-0 sudo[162648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iffkqhezddrzaxctisthnpzjcdcgoane ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393591.9475093-407-237819653341786/AnsiballZ_container_config_hash.py'
Nov 29 05:19:52 compute-0 sudo[162648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:52 compute-0 ceph-mon[75176]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:19:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:19:52 compute-0 python3.9[162650]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 05:19:52 compute-0 sudo[162648]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:53 compute-0 sudo[162800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipazvealoxubmjiexwtqhzikynvuposb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393593.1433468-416-266625698581762/AnsiballZ_podman_container_info.py'
Nov 29 05:19:53 compute-0 sudo[162800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:54 compute-0 python3.9[162802]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 05:19:54 compute-0 sudo[162800]: pam_unix(sudo:session): session closed for user root
Nov 29 05:19:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:54 compute-0 ceph-mon[75176]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:55 compute-0 sudo[162979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teppzadgdrilnijheswvxtetcnywobgb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764393594.8569272-429-278745231634538/AnsiballZ_edpm_container_manage.py'
Nov 29 05:19:55 compute-0 sudo[162979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:19:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:55 compute-0 python3[162981]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 05:19:56 compute-0 ceph-mon[75176]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:59 compute-0 ceph-mon[75176]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:19:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:19:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:00 compute-0 ceph-mon[75176]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:02 compute-0 podman[163060]: 2025-11-29 05:20:02.325404508 +0000 UTC m=+1.380331828 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 05:20:03 compute-0 ceph-mon[75176]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:04 compute-0 ceph-mon[75176]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:05 compute-0 podman[162993]: 2025-11-29 05:20:05.397692211 +0000 UTC m=+9.727755362 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 05:20:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:05 compute-0 podman[163155]: 2025-11-29 05:20:05.539647333 +0000 UTC m=+0.025352681 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 05:20:05 compute-0 podman[163155]: 2025-11-29 05:20:05.685068678 +0000 UTC m=+0.170774016 container create 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 05:20:05 compute-0 python3[162981]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 05:20:05 compute-0 sudo[162979]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:06 compute-0 sudo[163343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmyzxtqqibsrmpkrofvwjpwwrxcgqrlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393606.0037317-437-280755082973937/AnsiballZ_stat.py'
Nov 29 05:20:06 compute-0 sudo[163343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:06 compute-0 python3.9[163345]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:20:06 compute-0 sudo[163343]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:06 compute-0 ceph-mon[75176]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:07 compute-0 sudo[163497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txhnvyrhzkqgqkhtdlpyqkufgekvxkkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393606.7569907-446-142301867759700/AnsiballZ_file.py'
Nov 29 05:20:07 compute-0 sudo[163497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:07 compute-0 python3.9[163499]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:07 compute-0 sudo[163497]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:07 compute-0 sudo[163573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awxqpjfcefvlfsqtqbiewwwlacuukrgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393606.7569907-446-142301867759700/AnsiballZ_stat.py'
Nov 29 05:20:07 compute-0 sudo[163573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:07 compute-0 python3.9[163575]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:20:07 compute-0 sudo[163573]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:08 compute-0 sudo[163724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nitgnhduphlusdszlbijgdplkmgocixy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393607.8428926-446-135762990385855/AnsiballZ_copy.py'
Nov 29 05:20:08 compute-0 sudo[163724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:08 compute-0 python3.9[163726]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393607.8428926-446-135762990385855/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:08 compute-0 sudo[163724]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:08 compute-0 sudo[163800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buhfygzqvucqjeczztdpyduwcwzvhgex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393607.8428926-446-135762990385855/AnsiballZ_systemd.py'
Nov 29 05:20:08 compute-0 sudo[163800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:08 compute-0 ceph-mon[75176]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:09 compute-0 python3.9[163802]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 05:20:09 compute-0 systemd[1]: Reloading.
Nov 29 05:20:09 compute-0 systemd-rc-local-generator[163822]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:20:09 compute-0 systemd-sysv-generator[163827]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:20:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:09 compute-0 sudo[163800]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:09 compute-0 sudo[163910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mructcpprtkmvesxlsibzemqyocjqmmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393607.8428926-446-135762990385855/AnsiballZ_systemd.py'
Nov 29 05:20:09 compute-0 sudo[163910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:10 compute-0 python3.9[163912]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:20:10 compute-0 ceph-mon[75176]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:11 compute-0 systemd[1]: Reloading.
Nov 29 05:20:11 compute-0 systemd-rc-local-generator[163932]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:20:11 compute-0 systemd-sysv-generator[163941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:20:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:20:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:20:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:20:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:20:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:20:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:20:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:11 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 05:20:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398c4609a306f444849b2deffb49598961a5888b15151fc3ead216c4ea0f6244/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/398c4609a306f444849b2deffb49598961a5888b15151fc3ead216c4ea0f6244/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209.
Nov 29 05:20:11 compute-0 podman[163953]: 2025-11-29 05:20:11.758841429 +0000 UTC m=+0.146842152 container init 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: + sudo -E kolla_set_configs
Nov 29 05:20:11 compute-0 podman[163953]: 2025-11-29 05:20:11.809376405 +0000 UTC m=+0.197377098 container start 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:20:11 compute-0 edpm-start-podman-container[163953]: ovn_metadata_agent
Nov 29 05:20:11 compute-0 podman[163975]: 2025-11-29 05:20:11.879422393 +0000 UTC m=+0.058664927 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Validating config file
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 05:20:11 compute-0 edpm-start-podman-container[163952]: Creating additional drop-in dependency for "ovn_metadata_agent" (5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209)
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Copying service configuration files
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Writing out command to execute
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: ++ cat /run_command
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: + CMD=neutron-ovn-metadata-agent
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: + ARGS=
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: + sudo kolla_copy_cacerts
Nov 29 05:20:11 compute-0 systemd[1]: Reloading.
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: + [[ ! -n '' ]]
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: + . kolla_extend_start
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: + umask 0022
Nov 29 05:20:11 compute-0 ovn_metadata_agent[163968]: + exec neutron-ovn-metadata-agent
Nov 29 05:20:11 compute-0 systemd-sysv-generator[164050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:20:12 compute-0 systemd-rc-local-generator[164044]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:20:12 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 29 05:20:12 compute-0 sudo[163910]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:12 compute-0 sshd-session[154574]: Connection closed by 192.168.122.30 port 50240
Nov 29 05:20:12 compute-0 sshd-session[154571]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:20:12 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Nov 29 05:20:12 compute-0 systemd[1]: session-47.scope: Consumed 56.494s CPU time.
Nov 29 05:20:12 compute-0 systemd-logind[793]: Session 47 logged out. Waiting for processes to exit.
Nov 29 05:20:12 compute-0 systemd-logind[793]: Removed session 47.
Nov 29 05:20:12 compute-0 ceph-mon[75176]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.692 163973 INFO neutron.common.config [-] Logging enabled!
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.692 163973 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.693 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.694 163973 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.695 163973 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.696 163973 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.697 163973 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.698 163973 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.699 163973 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.700 163973 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.701 163973 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.702 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.703 163973 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.704 163973 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.705 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.706 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.707 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.708 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.709 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.710 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.711 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.712 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.713 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.714 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.715 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.716 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.717 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.718 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.719 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.720 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.721 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.722 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.723 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.724 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.725 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.726 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.727 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.728 163973 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
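The row of asterisks above closes the option dump that oslo.config emits once at startup: ConfigOpts.log_opt_values() (the very function cited in each line's source location) walks every registered option, including grouped ones such as ovn.* and oslo_messaging_rabbit.*, and logs one DEBUG line per option, masking values registered as secret (here transport_url and metadata_proxy_shared_secret) as ****. A minimal sketch of triggering the same dump from a service; the registered option is illustrative, only log_opt_values() itself is the real oslo.config call:

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Illustrative registration; mirrors one option visible in the dump.
    CONF.register_opts([cfg.IntOpt('metadata_workers', default=1)])

    def dump_config():
        # Emits one "name = value" DEBUG line per registered option,
        # framed by the '****' banner; secret=True options print as '****'.
        CONF.log_opt_values(LOG, logging.DEBUG)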
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.739 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.739 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.740 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.740 163973 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.740 163973 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.754 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 63cfe9d2-e938-418d-9401-5d1a600b4ede (UUID: 63cfe9d2-e938-418d-9401-5d1a600b4ede) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.781 163973 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.782 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.782 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.782 163973 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.784 163973 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.791 163973 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.796 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '63cfe9d2-e938-418d-9401-5d1a600b4ede'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f08f06f3e80>], external_ids={}, name=63cfe9d2-e938-418d-9401-5d1a600b4ede, nb_cfg_timestamp=1764393548543, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.797 163973 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f08f06f6b20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.798 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.798 163973 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.799 163973 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.799 163973 INFO oslo_service.service [-] Starting 1 workers
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.803 163973 DEBUG oslo_service.service [-] Started child 164082 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.806 163973 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmps4tl4zy4/privsep.sock']
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.808 164082 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-1022618'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.844 164082 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.845 164082 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.845 164082 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.850 164082 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.860 164082 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 05:20:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:13.869 164082 INFO eventlet.wsgi.server [-] (164082) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 29 05:20:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:14 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 05:20:14 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.496 163973 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 05:20:14 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.497 163973 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps4tl4zy4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 05:20:14 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.369 164087 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 05:20:14 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.377 164087 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 05:20:14 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:20:14 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.386 164087 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 05:20:14 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.387 164087 INFO oslo.privsep.daemon [-] privsep daemon running as pid 164087
Nov 29 05:20:14 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.502 164087 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf6b835-d36d-44df-baa0-f1c0329f554c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 05:20:14 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.989 164087 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:20:14 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.989 164087 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:20:14 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:14.989 164087 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:20:15 compute-0 ceph-mon[75176]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.540 164087 DEBUG oslo.privsep.daemon [-] privsep: reply[ef714392-a3f0-43a3-b011-b3db1bee63d6]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.542 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, column=external_ids, values=({'neutron:ovn-metadata-id': '44af6163-09e8-5582-b53f-e0fe312da172'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.551 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.558 163973 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.559 163973 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.560 163973 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.562 163973 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.562 163973 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.562 163973 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.563 163973 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.563 163973 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.563 163973 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.563 163973 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.563 163973 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.564 163973 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.565 163973 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.566 163973 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.567 163973 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.568 163973 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.569 163973 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.570 163973 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.571 163973 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.572 163973 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.573 163973 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.574 163973 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.575 163973 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.576 163973 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.577 163973 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.578 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.579 163973 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.580 163973 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.581 163973 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.582 163973 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.583 163973 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.584 163973 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.585 163973 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.586 163973 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.587 163973 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.588 163973 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.589 163973 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.590 163973 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.591 163973 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.592 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.593 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.594 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.595 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:20:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:20:15.596 163973 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 05:20:17 compute-0 ceph-mon[75176]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:18 compute-0 sshd-session[164093]: Accepted publickey for zuul from 192.168.122.30 port 56172 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:20:18 compute-0 systemd-logind[793]: New session 48 of user zuul.
Nov 29 05:20:18 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 29 05:20:18 compute-0 sshd-session[164093]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:20:19 compute-0 ceph-mon[75176]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:19 compute-0 python3.9[164246]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:20:20 compute-0 sudo[164400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shjzmvjzriusplxqdihiqxsfoevttbmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393620.308315-34-183517876069574/AnsiballZ_command.py'
Nov 29 05:20:20 compute-0 sudo[164400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:20 compute-0 python3.9[164402]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:20:21 compute-0 ceph-mon[75176]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:21 compute-0 sudo[164400]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:22 compute-0 sudo[164565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlfktujarckmbtqvkjhzqfyjcyydouid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393621.3866837-45-157070097418052/AnsiballZ_systemd_service.py'
Nov 29 05:20:22 compute-0 sudo[164565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:22 compute-0 python3.9[164567]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 05:20:22 compute-0 systemd[1]: Reloading.
Nov 29 05:20:22 compute-0 systemd-sysv-generator[164591]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:20:22 compute-0 systemd-rc-local-generator[164587]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:20:22 compute-0 sudo[164565]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:23 compute-0 ceph-mon[75176]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:23 compute-0 python3.9[164751]: ansible-ansible.builtin.service_facts Invoked
Nov 29 05:20:23 compute-0 network[164768]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 05:20:23 compute-0 network[164769]: 'network-scripts' will be removed from distribution in near future.
Nov 29 05:20:23 compute-0 network[164770]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 05:20:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:25 compute-0 ceph-mon[75176]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:27 compute-0 ceph-mon[75176]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:29 compute-0 ceph-mon[75176]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:29 compute-0 sudo[165031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gthbqlxxpqneltfrzizuekuhaatokibx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393629.07983-64-162634683481183/AnsiballZ_systemd_service.py'
Nov 29 05:20:29 compute-0 sudo[165031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:29 compute-0 python3.9[165033]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:20:29 compute-0 sudo[165031]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:30 compute-0 sudo[165184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwlbbwflqkoonfjogmtcoklsjfjbaywk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393629.8561082-64-213766202262026/AnsiballZ_systemd_service.py'
Nov 29 05:20:30 compute-0 sudo[165184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:30 compute-0 python3.9[165186]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:20:30 compute-0 sudo[165184]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:31 compute-0 ceph-mon[75176]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:31 compute-0 sudo[165337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsgeuylwbeumyylkrmmlficsmjfxatyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393630.726595-64-43692077447943/AnsiballZ_systemd_service.py'
Nov 29 05:20:31 compute-0 sudo[165337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:31 compute-0 python3.9[165339]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:20:31 compute-0 sudo[165337]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:32 compute-0 sudo[165490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptalahhmygpzyeitydwagxxoxlrcqzky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393631.7707446-64-11166897680430/AnsiballZ_systemd_service.py'
Nov 29 05:20:32 compute-0 sudo[165490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:32 compute-0 python3.9[165492]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:20:32 compute-0 sudo[165490]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:32 compute-0 sudo[165643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqpeyohyftcnufblaufbyfbzvvordbji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393632.598538-64-202069412344416/AnsiballZ_systemd_service.py'
Nov 29 05:20:32 compute-0 sudo[165643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:33 compute-0 python3.9[165645]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:20:33 compute-0 sudo[165643]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:33 compute-0 ceph-mon[75176]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:33 compute-0 podman[165647]: 2025-11-29 05:20:33.347924327 +0000 UTC m=+0.123261830 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 05:20:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:33 compute-0 sudo[165822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbgjywcrsgcqcgqsvwrmkiqrszbtighl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393633.3631334-64-261129532265915/AnsiballZ_systemd_service.py'
Nov 29 05:20:33 compute-0 sudo[165822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:34 compute-0 python3.9[165824]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:20:34 compute-0 sudo[165822]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:34 compute-0 ceph-mon[75176]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:34 compute-0 sudo[165975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpvnuuhcnidvmwsohoiqubjnwyfqgbqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393634.1647596-64-139305043708854/AnsiballZ_systemd_service.py'
Nov 29 05:20:34 compute-0 sudo[165975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:34 compute-0 python3.9[165977]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:20:34 compute-0 sudo[165975]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:35 compute-0 sudo[166128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ougsngziqvmpllpyzohxmbaeysmovmwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393635.137887-116-267182104833245/AnsiballZ_file.py'
Nov 29 05:20:35 compute-0 sudo[166128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:35 compute-0 sshd-session[164806]: error: kex_exchange_identification: read: Connection timed out
Nov 29 05:20:35 compute-0 sshd-session[164806]: banner exchange: Connection from 120.48.20.114 port 52448: Connection timed out
Nov 29 05:20:35 compute-0 python3.9[166130]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:36 compute-0 sudo[166128]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:36 compute-0 sudo[166280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsmywapypnvlzzyhqsgzppexwcfoewbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393636.1329954-116-83944487596168/AnsiballZ_file.py'
Nov 29 05:20:36 compute-0 sudo[166280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:36 compute-0 python3.9[166282]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:36 compute-0 sudo[166280]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:36 compute-0 ceph-mon[75176]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:37 compute-0 sudo[166432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeegnvmttxujpglidambkzwljovttbdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393636.791576-116-36678806825892/AnsiballZ_file.py'
Nov 29 05:20:37 compute-0 sudo[166432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:37 compute-0 python3.9[166434]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:37 compute-0 sudo[166432]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:37 compute-0 sudo[166584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcmtxserxbxvqjuekpbokmyqtixscxjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393637.5227687-116-220808865626317/AnsiballZ_file.py'
Nov 29 05:20:37 compute-0 sudo[166584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:38 compute-0 python3.9[166586]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:38 compute-0 sudo[166584]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:38 compute-0 sudo[166736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxekikcurrurgnnjecdcxklifwmtvonq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393638.2563353-116-65265054481356/AnsiballZ_file.py'
Nov 29 05:20:38 compute-0 sudo[166736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:38 compute-0 python3.9[166738]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:38 compute-0 sudo[166736]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:39 compute-0 ceph-mon[75176]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:39 compute-0 sudo[166888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvxzwfjpshehfnwbrfzdonypdkzuvdzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393638.9917316-116-12465714895965/AnsiballZ_file.py'
Nov 29 05:20:39 compute-0 sudo[166888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:39 compute-0 python3.9[166890]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:39 compute-0 sudo[166888]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:40 compute-0 sudo[167040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksnkbevbrqfojzrhbkwukudeveacbkyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393639.7347317-116-257463572836849/AnsiballZ_file.py'
Nov 29 05:20:40 compute-0 sudo[167040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:40 compute-0 python3.9[167042]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:40 compute-0 sudo[167040]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:40 compute-0 sudo[167192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpucudhpbnzrkmefzuabgktdcyagkzsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393640.5395408-166-74957154802093/AnsiballZ_file.py'
Nov 29 05:20:40 compute-0 sudo[167192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:41 compute-0 ceph-mon[75176]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:41 compute-0 python3.9[167194]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:41 compute-0 sudo[167192]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:20:41
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data']
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:20:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:41 compute-0 sudo[167344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcaufzyavtdjiyfhzmprmazlhtympgzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393641.2298265-166-210896372186476/AnsiballZ_file.py'
Nov 29 05:20:41 compute-0 sudo[167344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:41 compute-0 python3.9[167346]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:41 compute-0 sudo[167344]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:42 compute-0 podman[167377]: 2025-11-29 05:20:42.021078259 +0000 UTC m=+0.077964826 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 29 05:20:42 compute-0 sudo[167517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxisjnxvxzkhiyxgalrublsxmsnazgpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393641.9590297-166-163201018060623/AnsiballZ_file.py'
Nov 29 05:20:42 compute-0 sudo[167517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:42 compute-0 python3.9[167519]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:42 compute-0 sudo[167517]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:42 compute-0 sudo[167669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krngczwcgwuuccgvxckptqxbwwqlbixi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393642.5880072-166-20564230216641/AnsiballZ_file.py'
Nov 29 05:20:42 compute-0 sudo[167669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:43 compute-0 ceph-mon[75176]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:43 compute-0 python3.9[167671]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:43 compute-0 sudo[167669]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:43 compute-0 sudo[167821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfqetksvokgjkqfhlraqucjajivaathg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393643.293999-166-207054521915866/AnsiballZ_file.py'
Nov 29 05:20:43 compute-0 sudo[167821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:43 compute-0 python3.9[167823]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:43 compute-0 sudo[167821]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:44 compute-0 sudo[167973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhrndcjuyemikbuegifavrpxxbafxgvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393644.0682883-166-221748998049377/AnsiballZ_file.py'
Nov 29 05:20:44 compute-0 sudo[167973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:44 compute-0 python3.9[167975]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:44 compute-0 sudo[167973]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:45 compute-0 ceph-mon[75176]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:45 compute-0 sudo[168125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmnrwascldwnykefdrvsdxyjvlvgyfnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393644.7728689-166-108646158651847/AnsiballZ_file.py'
Nov 29 05:20:45 compute-0 sudo[168125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:45 compute-0 python3.9[168127]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:20:45 compute-0 sudo[168125]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:45 compute-0 sudo[168277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwlgyggdsejusdloxmkogcwmhrossvrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393645.6496875-217-141142143839106/AnsiballZ_command.py'
Nov 29 05:20:45 compute-0 sudo[168277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:46 compute-0 python3.9[168279]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:20:46 compute-0 sudo[168277]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:47 compute-0 ceph-mon[75176]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:47 compute-0 python3.9[168431]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 05:20:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:47 compute-0 sudo[168581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovkcwrkonjfazgtxpmyboiowpjkdznpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393647.379682-235-31615338175121/AnsiballZ_systemd_service.py'
Nov 29 05:20:47 compute-0 sudo[168581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:47 compute-0 python3.9[168583]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 05:20:47 compute-0 systemd[1]: Reloading.
Nov 29 05:20:48 compute-0 systemd-sysv-generator[168613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:20:48 compute-0 systemd-rc-local-generator[168609]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:20:48 compute-0 sudo[168581]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:48 compute-0 sudo[168769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbxwxuoaawdigozvxndxfudyegjmrlmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393648.5967035-243-151223817416359/AnsiballZ_command.py'
Nov 29 05:20:48 compute-0 sudo[168769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:49 compute-0 ceph-mon[75176]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:49 compute-0 python3.9[168771]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:20:49 compute-0 sudo[168769]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:49 compute-0 sudo[168922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbbuhxgvqobazmdngncezgxhrhmoxccy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393649.3175225-243-83115456844472/AnsiballZ_command.py'
Nov 29 05:20:49 compute-0 sudo[168922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:49 compute-0 python3.9[168924]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:20:49 compute-0 sudo[168922]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:50 compute-0 sudo[169075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vndeeyiabiasxbjcwvdcoicaswtyxgub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393650.037355-243-9521990172583/AnsiballZ_command.py'
Nov 29 05:20:50 compute-0 sudo[169075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:50 compute-0 python3.9[169077]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:20:50 compute-0 sudo[169075]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:51 compute-0 ceph-mon[75176]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:20:51 compute-0 sudo[169228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyofpztxbqgomfmnmbzmolbzzauksdck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393650.7222235-243-21852872694069/AnsiballZ_command.py'
Nov 29 05:20:51 compute-0 sudo[169228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:51 compute-0 python3.9[169230]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:20:51 compute-0 sudo[169228]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:51 compute-0 sudo[169351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:20:51 compute-0 sudo[169351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:51 compute-0 sudo[169351]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:51 compute-0 sudo[169406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbgmfqsmgyyyvpcupcqmkttnsdnxibwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393651.637664-243-69071523769996/AnsiballZ_command.py'
Nov 29 05:20:51 compute-0 sudo[169406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:51 compute-0 sudo[169407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:20:51 compute-0 sudo[169407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:51 compute-0 sudo[169407]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:52 compute-0 sudo[169434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:20:52 compute-0 sudo[169434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:52 compute-0 sudo[169434]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:52 compute-0 sudo[169459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:20:52 compute-0 sudo[169459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:52 compute-0 python3.9[169425]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:20:52 compute-0 sudo[169406]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:52 compute-0 podman[169657]: 2025-11-29 05:20:52.647445765 +0000 UTC m=+0.091990714 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:20:52 compute-0 sudo[169728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luwucalmpyfndrrcbkcslriuiulffwwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393652.3179796-243-92613273522424/AnsiballZ_command.py'
Nov 29 05:20:52 compute-0 sudo[169728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:52 compute-0 podman[169657]: 2025-11-29 05:20:52.733907281 +0000 UTC m=+0.178452200 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:20:52 compute-0 python3.9[169730]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:20:52 compute-0 sudo[169728]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:53 compute-0 ceph-mon[75176]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:53 compute-0 sudo[169459]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:20:53 compute-0 sudo[170018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ennbxdmdvhrhnxthgprumhvuqvugellb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393653.1212785-243-157842275634478/AnsiballZ_command.py'
Nov 29 05:20:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:20:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:20:53 compute-0 sudo[170018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:20:53 compute-0 sudo[170021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:20:53 compute-0 sudo[170021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:53 compute-0 sudo[170021]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:53 compute-0 sudo[170046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:20:53 compute-0 sudo[170046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:53 compute-0 sudo[170046]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:53 compute-0 sudo[170071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:20:53 compute-0 sudo[170071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:53 compute-0 sudo[170071]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:53 compute-0 python3.9[170020]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:20:53 compute-0 sudo[170096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:20:53 compute-0 sudo[170096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:53 compute-0 sudo[170018]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:54 compute-0 sudo[170096]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 05:20:54 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:20:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:20:54 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:20:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:20:54 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:20:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:20:54 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:20:54 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a291ce35-193f-40db-8abb-0a4d82f3531b does not exist
Nov 29 05:20:54 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 7d5867d4-556a-4d75-b623-16a4a860d68a does not exist
Nov 29 05:20:54 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 39726ccc-5ae8-42ed-aa0c-f2117e4c58b3 does not exist
Nov 29 05:20:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:20:54 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:20:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:20:54 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:20:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:20:54 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:20:54 compute-0 sudo[170247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:20:54 compute-0 sudo[170247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:54 compute-0 sudo[170247]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:54 compute-0 sudo[170298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:20:54 compute-0 sudo[170298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:54 compute-0 sudo[170298]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:20:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:20:54 compute-0 ceph-mon[75176]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:20:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:20:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:20:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:20:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:20:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:20:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:20:54 compute-0 sudo[170361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewaalqindtmcyrfmzrsgqarbeyqywxtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393653.9581165-297-223601663277040/AnsiballZ_getent.py'
Nov 29 05:20:54 compute-0 sudo[170361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:54 compute-0 sudo[170345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:20:54 compute-0 sudo[170345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:54 compute-0 sudo[170345]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:54 compute-0 sudo[170378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:20:54 compute-0 sudo[170378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:54 compute-0 python3.9[170375]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 05:20:54 compute-0 sudo[170361]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:54 compute-0 podman[170491]: 2025-11-29 05:20:54.896569391 +0000 UTC m=+0.053497388 container create 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 05:20:54 compute-0 systemd[1]: Started libpod-conmon-36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161.scope.
Nov 29 05:20:54 compute-0 podman[170491]: 2025-11-29 05:20:54.866078566 +0000 UTC m=+0.023006613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:20:54 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:20:54 compute-0 podman[170491]: 2025-11-29 05:20:54.995024285 +0000 UTC m=+0.151952332 container init 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 05:20:55 compute-0 podman[170491]: 2025-11-29 05:20:55.004245234 +0000 UTC m=+0.161173201 container start 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 05:20:55 compute-0 podman[170491]: 2025-11-29 05:20:55.007606808 +0000 UTC m=+0.164534795 container attach 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:20:55 compute-0 serene_wing[170537]: 167 167
Nov 29 05:20:55 compute-0 systemd[1]: libpod-36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161.scope: Deactivated successfully.
Nov 29 05:20:55 compute-0 podman[170491]: 2025-11-29 05:20:55.012012827 +0000 UTC m=+0.168940804 container died 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:20:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-123a45dd65a22c0d1e3c3cff0d31b7baab68c6abfc05a89a1e9c337841d0c7f2-merged.mount: Deactivated successfully.
Nov 29 05:20:55 compute-0 podman[170491]: 2025-11-29 05:20:55.052686466 +0000 UTC m=+0.209614443 container remove 36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 05:20:55 compute-0 systemd[1]: libpod-conmon-36b3977551d1bddfbd83e4379623e28b072266c30e6a57f51ac8b699c40f3161.scope: Deactivated successfully.
Nov 29 05:20:55 compute-0 podman[170587]: 2025-11-29 05:20:55.288547689 +0000 UTC m=+0.068112301 container create eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 05:20:55 compute-0 systemd[1]: Started libpod-conmon-eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0.scope.
Nov 29 05:20:55 compute-0 sudo[170652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyqkeuejfrvyufzjsynbnkjdypklbqjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393654.840635-305-63925590507553/AnsiballZ_group.py'
Nov 29 05:20:55 compute-0 sudo[170652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:55 compute-0 podman[170587]: 2025-11-29 05:20:55.258509694 +0000 UTC m=+0.038074346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:20:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:55 compute-0 podman[170587]: 2025-11-29 05:20:55.429816875 +0000 UTC m=+0.209381537 container init eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 05:20:55 compute-0 podman[170587]: 2025-11-29 05:20:55.439114246 +0000 UTC m=+0.218678838 container start eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 05:20:55 compute-0 podman[170587]: 2025-11-29 05:20:55.442379537 +0000 UTC m=+0.221944189 container attach eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 05:20:55 compute-0 python3.9[170656]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 05:20:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:55 compute-0 groupadd[170660]: group added to /etc/group: name=libvirt, GID=42473
Nov 29 05:20:55 compute-0 groupadd[170660]: group added to /etc/gshadow: name=libvirt
Nov 29 05:20:55 compute-0 groupadd[170660]: new group: name=libvirt, GID=42473
Nov 29 05:20:55 compute-0 sudo[170652]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:56 compute-0 sudo[170837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iybkqziscamjphfuqjitotipxzrykgyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393655.9603918-313-92590856262018/AnsiballZ_user.py'
Nov 29 05:20:56 compute-0 sudo[170837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:56 compute-0 agitated_nash[170654]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:20:56 compute-0 agitated_nash[170654]: --> relative data size: 1.0
Nov 29 05:20:56 compute-0 agitated_nash[170654]: --> All data devices are unavailable
Nov 29 05:20:56 compute-0 systemd[1]: libpod-eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0.scope: Deactivated successfully.
Nov 29 05:20:56 compute-0 podman[170587]: 2025-11-29 05:20:56.585329832 +0000 UTC m=+1.364894424 container died eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:20:56 compute-0 systemd[1]: libpod-eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0.scope: Consumed 1.074s CPU time.
Nov 29 05:20:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-06c1136ce99e168bab0af6183a6f8d0ce5978d49c51be2370dc64646ba570cd8-merged.mount: Deactivated successfully.
Nov 29 05:20:56 compute-0 podman[170587]: 2025-11-29 05:20:56.645990917 +0000 UTC m=+1.425555499 container remove eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_nash, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:20:56 compute-0 systemd[1]: libpod-conmon-eaf9b19fe88dc9efa3c0c04e5cf213684d9d2cf45d46fd52dd42639cd4254bf0.scope: Deactivated successfully.
Nov 29 05:20:56 compute-0 ceph-mon[75176]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:20:56 compute-0 sudo[170378]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:56 compute-0 sudo[170853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:20:56 compute-0 sudo[170853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:56 compute-0 sudo[170853]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:56 compute-0 python3.9[170840]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 05:20:56 compute-0 sudo[170878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:20:56 compute-0 useradd[170902]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 29 05:20:56 compute-0 sudo[170878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:56 compute-0 sudo[170878]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:56 compute-0 sudo[170905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:20:56 compute-0 sudo[170905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:56 compute-0 sudo[170905]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:56 compute-0 sudo[170837]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:56 compute-0 sudo[170936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:20:56 compute-0 sudo[170936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:57 compute-0 podman[171026]: 2025-11-29 05:20:57.198776426 +0000 UTC m=+0.060239976 container create ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:20:57 compute-0 systemd[1]: Started libpod-conmon-ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f.scope.
Nov 29 05:20:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:20:57 compute-0 podman[171026]: 2025-11-29 05:20:57.176517133 +0000 UTC m=+0.037980693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:20:57 compute-0 podman[171026]: 2025-11-29 05:20:57.289904307 +0000 UTC m=+0.151367897 container init ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:20:57 compute-0 podman[171026]: 2025-11-29 05:20:57.296238534 +0000 UTC m=+0.157702074 container start ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 05:20:57 compute-0 podman[171026]: 2025-11-29 05:20:57.300344506 +0000 UTC m=+0.161808106 container attach ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:20:57 compute-0 silly_tesla[171079]: 167 167
Nov 29 05:20:57 compute-0 systemd[1]: libpod-ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f.scope: Deactivated successfully.
Nov 29 05:20:57 compute-0 podman[171026]: 2025-11-29 05:20:57.303898175 +0000 UTC m=+0.165361725 container died ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 29 05:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-75f33c0b59e64aed51382710a53996757748cc5eb0ba149d40051e729129cc26-merged.mount: Deactivated successfully.
Nov 29 05:20:57 compute-0 podman[171026]: 2025-11-29 05:20:57.351994958 +0000 UTC m=+0.213458468 container remove ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:20:57 compute-0 systemd[1]: libpod-conmon-ddd2e3852980ece88ac0aaf31180d97f7e80910c4c96651aa1b15f18bebd161f.scope: Deactivated successfully.
Nov 29 05:20:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:57 compute-0 podman[171166]: 2025-11-29 05:20:57.541272375 +0000 UTC m=+0.039765397 container create f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:20:57 compute-0 systemd[1]: Started libpod-conmon-f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e.scope.
Nov 29 05:20:57 compute-0 sudo[171205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxmvgstvvxsdkhmixywfepddyimhydtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393657.1928203-324-78797299905908/AnsiballZ_setup.py'
Nov 29 05:20:57 compute-0 sudo[171205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657adab5b3c63eb04dfb001573ae7abb8c7679a68865760c550d367a8c5c538d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657adab5b3c63eb04dfb001573ae7abb8c7679a68865760c550d367a8c5c538d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657adab5b3c63eb04dfb001573ae7abb8c7679a68865760c550d367a8c5c538d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657adab5b3c63eb04dfb001573ae7abb8c7679a68865760c550d367a8c5c538d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:57 compute-0 podman[171166]: 2025-11-29 05:20:57.524661373 +0000 UTC m=+0.023154415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:20:57 compute-0 podman[171166]: 2025-11-29 05:20:57.627367982 +0000 UTC m=+0.125861004 container init f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 05:20:57 compute-0 podman[171166]: 2025-11-29 05:20:57.636516169 +0000 UTC m=+0.135009181 container start f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:20:57 compute-0 podman[171166]: 2025-11-29 05:20:57.639589046 +0000 UTC m=+0.138082068 container attach f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:20:57 compute-0 python3.9[171211]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:20:58 compute-0 sudo[171205]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]: {
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:     "0": [
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:         {
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "devices": [
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "/dev/loop3"
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             ],
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_name": "ceph_lv0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_size": "21470642176",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "name": "ceph_lv0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "tags": {
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.cluster_name": "ceph",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.crush_device_class": "",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.encrypted": "0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.osd_id": "0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.type": "block",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.vdo": "0"
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             },
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "type": "block",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "vg_name": "ceph_vg0"
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:         }
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:     ],
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:     "1": [
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:         {
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "devices": [
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "/dev/loop4"
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             ],
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_name": "ceph_lv1",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_size": "21470642176",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "name": "ceph_lv1",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "tags": {
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.cluster_name": "ceph",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.crush_device_class": "",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.encrypted": "0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.osd_id": "1",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.type": "block",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.vdo": "0"
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             },
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "type": "block",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "vg_name": "ceph_vg1"
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:         }
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:     ],
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:     "2": [
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:         {
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "devices": [
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "/dev/loop5"
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             ],
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_name": "ceph_lv2",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_size": "21470642176",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "name": "ceph_lv2",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "tags": {
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.cluster_name": "ceph",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.crush_device_class": "",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.encrypted": "0",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.osd_id": "2",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.type": "block",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:                 "ceph.vdo": "0"
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             },
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "type": "block",
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:             "vg_name": "ceph_vg2"
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:         }
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]:     ]
Nov 29 05:20:58 compute-0 eloquent_lalande[171209]: }
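[editor's note] The JSON block above is the payload of the eloquent_lalande helper container; by its shape it is the output of `ceph-volume lvm list --format json`: top-level keys are OSD ids, each mapping to the backing logical volume and its ceph.* LVM tags. A minimal sketch for pulling the osd_id -> device mapping out of a saved copy of this output (the filename is hypothetical):

    import json

    # Hypothetical capture of the JSON printed by the container above
    # (by its shape, `ceph-volume lvm list --format json`).
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # Top-level keys are OSD ids; each value is a list of LV records.
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")

Per the log, this would print osd.0 on /dev/ceph_vg0/ceph_lv0 (/dev/loop3), osd.1 on ceph_vg1, and osd.2 on ceph_vg2, all tagged with the same ceph.cluster_fsid 93f82912-647c-5e78-b081-707d0a2966d8.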
Nov 29 05:20:58 compute-0 systemd[1]: libpod-f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e.scope: Deactivated successfully.
Nov 29 05:20:58 compute-0 podman[171227]: 2025-11-29 05:20:58.461279777 +0000 UTC m=+0.026595141 container died f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:20:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-657adab5b3c63eb04dfb001573ae7abb8c7679a68865760c550d367a8c5c538d-merged.mount: Deactivated successfully.
Nov 29 05:20:58 compute-0 podman[171227]: 2025-11-29 05:20:58.510728345 +0000 UTC m=+0.076043729 container remove f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_lalande, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:20:58 compute-0 systemd[1]: libpod-conmon-f33f86dae5a346265df653439e8703286647f2f0767c2938ead93443b246162e.scope: Deactivated successfully.
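[editor's note] The lines above trace podman's full short-lived-container lifecycle for one cephadm helper: create, init, start, attach, then died and remove once the one-shot command exits. A hedged sketch for watching that sequence live via `podman events --format json` (a standard podman subcommand); the JSON key names follow podman's event output and may vary slightly between versions:

    import json
    import subprocess

    # Stream podman container events as one JSON object per line.
    # Assumes `podman` is on PATH.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:
        ev = json.loads(line)
        # Statuses seen in this log: create, init, start, attach, died, remove.
        print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name"))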
Nov 29 05:20:58 compute-0 sudo[170936]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:58 compute-0 sudo[171265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:20:58 compute-0 sudo[171265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:58 compute-0 sudo[171265]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:58 compute-0 ceph-mon[75176]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:58 compute-0 sudo[171314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:20:58 compute-0 sudo[171314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:58 compute-0 sudo[171363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udjmkeafjykrxdnltaosytlkyspkojsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393657.1928203-324-78797299905908/AnsiballZ_dnf.py'
Nov 29 05:20:58 compute-0 sudo[171314]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:58 compute-0 sudo[171363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:20:58 compute-0 sudo[171368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:20:58 compute-0 sudo[171368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:58 compute-0 sudo[171368]: pam_unix(sudo:session): session closed for user root
Nov 29 05:20:58 compute-0 sudo[171393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:20:58 compute-0 sudo[171393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:20:58 compute-0 python3.9[171367]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:20:59 compute-0 podman[171461]: 2025-11-29 05:20:59.227769349 +0000 UTC m=+0.057177130 container create 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:20:59 compute-0 systemd[1]: Started libpod-conmon-6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f.scope.
Nov 29 05:20:59 compute-0 podman[171461]: 2025-11-29 05:20:59.197644371 +0000 UTC m=+0.027052242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:20:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:20:59 compute-0 podman[171461]: 2025-11-29 05:20:59.320625723 +0000 UTC m=+0.150033534 container init 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:20:59 compute-0 podman[171461]: 2025-11-29 05:20:59.331504904 +0000 UTC m=+0.160912725 container start 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:20:59 compute-0 podman[171461]: 2025-11-29 05:20:59.33501965 +0000 UTC m=+0.164427441 container attach 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:20:59 compute-0 cranky_meitner[171478]: 167 167
Nov 29 05:20:59 compute-0 systemd[1]: libpod-6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f.scope: Deactivated successfully.
Nov 29 05:20:59 compute-0 podman[171461]: 2025-11-29 05:20:59.338836685 +0000 UTC m=+0.168244476 container died 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:20:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6c5b35713fb8047ddb79fd5d3f93f0a4b04beb8c60d479c572064a8db425fe8-merged.mount: Deactivated successfully.
Nov 29 05:20:59 compute-0 podman[171461]: 2025-11-29 05:20:59.391918323 +0000 UTC m=+0.221326104 container remove 6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meitner, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:20:59 compute-0 systemd[1]: libpod-conmon-6e24867731443adc3a70fc5263b926d122128140c12b3c29b91cc98215a3b01f.scope: Deactivated successfully.
Nov 29 05:20:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:20:59 compute-0 podman[171501]: 2025-11-29 05:20:59.591340532 +0000 UTC m=+0.041885260 container create acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:20:59 compute-0 systemd[1]: Started libpod-conmon-acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed.scope.
Nov 29 05:20:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0952198dc655e2d7fb22fff7939fb1f442106c0e6a7eed54b52b1dda20dcaa8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0952198dc655e2d7fb22fff7939fb1f442106c0e6a7eed54b52b1dda20dcaa8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0952198dc655e2d7fb22fff7939fb1f442106c0e6a7eed54b52b1dda20dcaa8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0952198dc655e2d7fb22fff7939fb1f442106c0e6a7eed54b52b1dda20dcaa8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:20:59 compute-0 podman[171501]: 2025-11-29 05:20:59.576537775 +0000 UTC m=+0.027082523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:20:59 compute-0 podman[171501]: 2025-11-29 05:20:59.689172779 +0000 UTC m=+0.139717557 container init acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:20:59 compute-0 podman[171501]: 2025-11-29 05:20:59.697923877 +0000 UTC m=+0.148468645 container start acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:20:59 compute-0 podman[171501]: 2025-11-29 05:20:59.703888765 +0000 UTC m=+0.154433523 container attach acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:21:00 compute-0 ceph-mon[75176]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]: {
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "osd_id": 0,
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "type": "bluestore"
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:     },
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "osd_id": 1,
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "type": "bluestore"
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:     },
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "osd_id": 2,
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:         "type": "bluestore"
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]:     }
Nov 29 05:21:00 compute-0 flamboyant_lehmann[171517]: }
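[editor's note] This second payload comes from the `raw list --format json` invocation logged via sudo at 05:20:58; it is keyed by osd_uuid rather than osd_id and reports each OSD as a bluestore device. A small cross-check sketch, matching raw-list UUIDs against the ceph.osd_fsid tags from the earlier lvm listing (filenames hypothetical):

    import json

    # Hypothetical captures of the two JSON payloads shown in this log.
    with open("lvm_list.json") as f:
        lvm = json.load(f)          # keyed by osd_id
    with open("raw_list.json") as f:
        raw = json.load(f)          # keyed by osd_uuid

    # osd_fsid tag from the LVM listing -> osd id
    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]: osd_id
                 for osd_id, lvs in lvm.items() for lv in lvs}

    for osd_uuid, rec in raw.items():
        osd = lvm_fsids.get(osd_uuid)
        status = f"osd.{osd}" if osd is not None else "not in lvm list"
        print(f"{osd_uuid} -> {rec['device']} ({rec['type']}): {status}")

All three UUIDs here (3cc3f442..., b9801566..., eec69945...) match the osd_fsid tags above, so both views agree on the OSD layout.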
Nov 29 05:21:00 compute-0 systemd[1]: libpod-acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed.scope: Deactivated successfully.
Nov 29 05:21:00 compute-0 systemd[1]: libpod-acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed.scope: Consumed 1.117s CPU time.
Nov 29 05:21:00 compute-0 podman[171554]: 2025-11-29 05:21:00.874187119 +0000 UTC m=+0.044647560 container died acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:21:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0952198dc655e2d7fb22fff7939fb1f442106c0e6a7eed54b52b1dda20dcaa8d-merged.mount: Deactivated successfully.
Nov 29 05:21:00 compute-0 podman[171554]: 2025-11-29 05:21:00.940128335 +0000 UTC m=+0.110588746 container remove acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lehmann, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:21:00 compute-0 systemd[1]: libpod-conmon-acb64df4081b0a306a77be30df219a3cfee56083de3c458b28883d9a231bd6ed.scope: Deactivated successfully.
Nov 29 05:21:00 compute-0 sudo[171393]: pam_unix(sudo:session): session closed for user root
Nov 29 05:21:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:21:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:21:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:21:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:21:01 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev f03ab662-4ea3-4469-9f2b-02bd4e4ca606 does not exist
Nov 29 05:21:01 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev e52773d9-7216-48f3-b64e-6e995ef363af does not exist
Nov 29 05:21:01 compute-0 sudo[171571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:21:01 compute-0 sudo[171571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:21:01 compute-0 sudo[171571]: pam_unix(sudo:session): session closed for user root
Nov 29 05:21:01 compute-0 sudo[171599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:21:01 compute-0 sudo[171599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:21:01 compute-0 sudo[171599]: pam_unix(sudo:session): session closed for user root
Nov 29 05:21:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:21:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:21:03 compute-0 ceph-mon[75176]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:04 compute-0 podman[171631]: 2025-11-29 05:21:04.088693585 +0000 UTC m=+0.143821412 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
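[editor's note] The config_data label in these health_status events is a Python dict literal (single quotes, bare True), not JSON, so json.loads will reject it. A sketch that extracts it from a journal line with ast.literal_eval; the line below is abridged from the event above:

    import ast
    import re

    # Abridged from the health_status event above; config_data is a
    # Python dict literal, so ast.literal_eval parses it where
    # json.loads would fail.
    line = ("... config_data={'depends_on': ['openvswitch.service'], "
            "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', "
            "'test': '/openstack/healthcheck'}, 'privileged': True} ...")

    m = re.search(r"config_data=(\{.*\})", line)
    if m:
        cfg = ast.literal_eval(m.group(1))
        print(cfg["healthcheck"]["test"])   # -> /openstack/healthcheck
        print(cfg["privileged"])            # -> True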
Nov 29 05:21:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:05 compute-0 ceph-mon[75176]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:07 compute-0 ceph-mon[75176]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:09 compute-0 ceph-mon[75176]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:11 compute-0 ceph-mon[75176]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:21:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:21:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:21:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:21:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:21:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:21:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:13 compute-0 podman[171830]: 2025-11-29 05:21:13.019742225 +0000 UTC m=+0.071562607 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 05:21:13 compute-0 ceph-mon[75176]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:21:13.730 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:21:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:21:13.731 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:21:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:21:13.731 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
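[editor's note] The three DEBUG lines above are oslo.concurrency's standard trace around a named lock: acquiring, acquired (with wait time), released (with hold time). The equivalent pattern in application code, sketched with the real lockutils API (this standalone use is illustrative, not neutron's actual ProcessMonitor code):

    from oslo_concurrency import lockutils

    # Named in-process lock; at DEBUG verbosity oslo.concurrency logs the
    # same "Acquiring lock" / "acquired" / "released" trio seen above.
    with lockutils.lock("_check_child_processes"):
        pass  # ... inspect child processes, as ProcessMonitor does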
Nov 29 05:21:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:14 compute-0 sshd-session[171849]: Invalid user frappe from 152.32.145.111 port 46794
Nov 29 05:21:14 compute-0 sshd-session[171849]: Received disconnect from 152.32.145.111 port 46794:11: Bye Bye [preauth]
Nov 29 05:21:14 compute-0 sshd-session[171849]: Disconnected from invalid user frappe 152.32.145.111 port 46794 [preauth]
Nov 29 05:21:15 compute-0 ceph-mon[75176]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:15 compute-0 sshd-session[171851]: Invalid user tony from 45.120.216.232 port 49848
Nov 29 05:21:16 compute-0 sshd-session[171851]: Received disconnect from 45.120.216.232 port 49848:11: Bye Bye [preauth]
Nov 29 05:21:16 compute-0 sshd-session[171851]: Disconnected from invalid user tony 45.120.216.232 port 49848 [preauth]
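[editor's note] Mixed into the deployment traffic, these sshd-session lines (user frappe from 152.32.145.111, user tony from 45.120.216.232) are garden-variety SSH scans against the node's public address. A small sketch for tallying such attempts per source IP from an exported journal (the filename is hypothetical):

    import re
    from collections import Counter

    # Matches sshd-session lines such as:
    #   Invalid user frappe from 152.32.145.111 port 46794
    pat = re.compile(r"Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port \d+")

    hits = Counter()
    with open("journal-export.log") as f:   # hypothetical journal export
        for line in f:
            m = pat.search(line)
            if m:
                hits[m.group(2)] += 1       # count per source IP

    for ip, count in hits.most_common():
        print(ip, count)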
Nov 29 05:21:17 compute-0 ceph-mon[75176]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:19 compute-0 ceph-mon[75176]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:21 compute-0 ceph-mon[75176]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:23 compute-0 ceph-mon[75176]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:25 compute-0 ceph-mon[75176]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:25 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Nov 29 05:21:25 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:21:25 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 05:21:25 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:21:25 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:21:25 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:21:25 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:21:25 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:21:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:27 compute-0 ceph-mon[75176]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:29 compute-0 ceph-mon[75176]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:31 compute-0 ceph-mon[75176]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:33 compute-0 ceph-mon[75176]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:34 compute-0 ceph-mon[75176]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:34 compute-0 kernel: SELinux:  Converting 2768 SID table entries...
Nov 29 05:21:34 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:21:34 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 05:21:34 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:21:34 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:21:34 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:21:34 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:21:34 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:21:34 compute-0 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 05:21:35 compute-0 podman[171875]: 2025-11-29 05:21:35.049038586 +0000 UTC m=+0.092089554 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 05:21:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:36 compute-0 ceph-mon[75176]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:38 compute-0 ceph-mon[75176]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:40 compute-0 ceph-mon[75176]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:21:41
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta']
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:21:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:42 compute-0 ceph-mon[75176]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:44 compute-0 podman[171901]: 2025-11-29 05:21:44.064538348 +0000 UTC m=+0.106849485 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 05:21:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:44 compute-0 ceph-mon[75176]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:44 compute-0 sshd-session[171874]: error: kex_exchange_identification: read: Connection timed out
Nov 29 05:21:44 compute-0 sshd-session[171874]: banner exchange: Connection from 106.12.151.247 port 37280: Connection timed out
Nov 29 05:21:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:46 compute-0 ceph-mon[75176]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:48 compute-0 ceph-mon[75176]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:50 compute-0 ceph-mon[75176]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
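The pg_autoscaler sweep above is reproducible by hand: each pool's raw PG target is its usage ratio times its bias times the per-OSD PG target times the OSD count, and the result is then quantized to a power of two (pools whose target would not change pg_num past the adjustment threshold are left alone, hence "quantized to 32 (current 32)" for the empty pools). A minimal sketch of that arithmetic, assuming the upstream default mon_target_pg_per_osd=100 and the 3 OSDs that appear later in this log; both values are inferred, not stated in these lines:

    # Sketch of the pg_autoscaler arithmetic implied by the log lines above.
    # Assumed inputs: 3 OSDs and the default mon_target_pg_per_osd = 100
    # (neither value is stated in this section of the log).
    def pg_target(usage_ratio: float, bias: float,
                  osds: int = 3, pg_per_osd: int = 100) -> float:
        return usage_ratio * bias * pg_per_osd * osds

    # Pool '.mgr': usage 7.185749983720779e-06, bias 1.0
    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557, matches the logged pg target
    # Pool 'cephfs.cephfs.meta': usage 5.087256625643029e-07, bias 4.0
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061047, matches the logged pg target

The same product reproduces the '.rgw.root', 'default.rgw.log', and 'default.rgw.meta' targets, and the 64411926528-byte capacity in the effective_target_ratio lines equals three times the 21470642176-byte LVs listed further down, consistent with the 60 GiB total reported by pgmap.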
Nov 29 05:21:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:52 compute-0 ceph-mon[75176]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:55 compute-0 ceph-mon[75176]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:56 compute-0 ceph-mon[75176]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:58 compute-0 ceph-mon[75176]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:21:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:21:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:00 compute-0 ceph-mon[75176]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:01 compute-0 sudo[179291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:22:01 compute-0 sudo[179291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:01 compute-0 sudo[179291]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:01 compute-0 sudo[179360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:22:01 compute-0 sudo[179360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:01 compute-0 sudo[179360]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:01 compute-0 sudo[179428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:22:01 compute-0 sudo[179428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:01 compute-0 sudo[179428]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:01 compute-0 sudo[179490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:22:01 compute-0 sudo[179490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:02 compute-0 sudo[179490]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:22:02 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:22:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:22:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:22:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:22:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:22:02 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev aeed7bd5-9677-44ac-98e7-a18dff541527 does not exist
Nov 29 05:22:02 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 1f48276e-5066-4a59-a491-501557c76f00 does not exist
Nov 29 05:22:02 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a5e4cbbe-c04b-4d04-b54f-5aea4935df6c does not exist
Nov 29 05:22:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:22:02 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:22:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:22:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:22:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:22:02 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:22:02 compute-0 sudo[179829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:22:02 compute-0 sudo[179829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:02 compute-0 sudo[179829]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:02 compute-0 sudo[179897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:22:02 compute-0 sudo[179897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:02 compute-0 sudo[179897]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:02 compute-0 sudo[179964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:22:02 compute-0 sudo[179964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:02 compute-0 sudo[179964]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:02 compute-0 sudo[180033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:22:02 compute-0 sudo[180033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:02 compute-0 ceph-mon[75176]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:22:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:22:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:22:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:22:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:22:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:22:02 compute-0 podman[180303]: 2025-11-29 05:22:02.793801459 +0000 UTC m=+0.053265510 container create 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:22:02 compute-0 systemd[1]: Started libpod-conmon-4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89.scope.
Nov 29 05:22:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:22:02 compute-0 podman[180303]: 2025-11-29 05:22:02.767855858 +0000 UTC m=+0.027319989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:22:02 compute-0 podman[180303]: 2025-11-29 05:22:02.879121994 +0000 UTC m=+0.138586065 container init 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:22:02 compute-0 podman[180303]: 2025-11-29 05:22:02.885518004 +0000 UTC m=+0.144982055 container start 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 05:22:02 compute-0 podman[180303]: 2025-11-29 05:22:02.888806152 +0000 UTC m=+0.148270213 container attach 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:22:02 compute-0 musing_darwin[180384]: 167 167
Nov 29 05:22:02 compute-0 systemd[1]: libpod-4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89.scope: Deactivated successfully.
Nov 29 05:22:02 compute-0 podman[180433]: 2025-11-29 05:22:02.928148936 +0000 UTC m=+0.023603794 container died 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:22:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-931551e1b7ba0a0bb519beebb07ccffc7473b381d9f0d7b5ae16988c3506077c-merged.mount: Deactivated successfully.
Nov 29 05:22:02 compute-0 podman[180433]: 2025-11-29 05:22:02.967708155 +0000 UTC m=+0.063163013 container remove 4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:22:02 compute-0 systemd[1]: libpod-conmon-4c5b767fad6c85aeea49ab6bbf8053af67bbd063dcc97cf76540443f913f5b89.scope: Deactivated successfully.
Nov 29 05:22:03 compute-0 podman[180566]: 2025-11-29 05:22:03.143701303 +0000 UTC m=+0.047501892 container create d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 05:22:03 compute-0 systemd[1]: Started libpod-conmon-d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a.scope.
Nov 29 05:22:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:03 compute-0 podman[180566]: 2025-11-29 05:22:03.213586622 +0000 UTC m=+0.117387281 container init d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:22:03 compute-0 podman[180566]: 2025-11-29 05:22:03.120421947 +0000 UTC m=+0.024222566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:22:03 compute-0 podman[180566]: 2025-11-29 05:22:03.228548017 +0000 UTC m=+0.132348636 container start d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:22:03 compute-0 podman[180566]: 2025-11-29 05:22:03.232624501 +0000 UTC m=+0.136425120 container attach d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:22:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:04 compute-0 nostalgic_beaver[180636]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:22:04 compute-0 nostalgic_beaver[180636]: --> relative data size: 1.0
Nov 29 05:22:04 compute-0 nostalgic_beaver[180636]: --> All data devices are unavailable
Nov 29 05:22:04 compute-0 podman[180566]: 2025-11-29 05:22:04.281126288 +0000 UTC m=+1.184926887 container died d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:22:04 compute-0 systemd[1]: libpod-d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a.scope: Deactivated successfully.
Nov 29 05:22:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcebc62181dcba12feac81b2401920115d8f870c8944a3fc0e318908268ac691-merged.mount: Deactivated successfully.
Nov 29 05:22:04 compute-0 podman[180566]: 2025-11-29 05:22:04.334188253 +0000 UTC m=+1.237988832 container remove d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:22:04 compute-0 systemd[1]: libpod-conmon-d7cb8a1bda9ebf75a4f23e15cfa256c59f708be6e1d3064687d9239ff2fd3c3a.scope: Deactivated successfully.
Nov 29 05:22:04 compute-0 sudo[180033]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:04 compute-0 sudo[181318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:22:04 compute-0 sudo[181318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:04 compute-0 sudo[181318]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:04 compute-0 sudo[181394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:22:04 compute-0 sudo[181394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:04 compute-0 sudo[181394]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:04 compute-0 sudo[181460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:22:04 compute-0 sudo[181460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:04 compute-0 sudo[181460]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:04 compute-0 ceph-mon[75176]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:04 compute-0 sudo[181523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:22:04 compute-0 sudo[181523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:05 compute-0 podman[181812]: 2025-11-29 05:22:05.002140159 +0000 UTC m=+0.091441620 container create 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:22:05 compute-0 podman[181812]: 2025-11-29 05:22:04.932300231 +0000 UTC m=+0.021601672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:22:05 compute-0 systemd[1]: Started libpod-conmon-990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1.scope.
Nov 29 05:22:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:22:05 compute-0 podman[181812]: 2025-11-29 05:22:05.087736759 +0000 UTC m=+0.177038200 container init 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:22:05 compute-0 podman[181812]: 2025-11-29 05:22:05.098948148 +0000 UTC m=+0.188249579 container start 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 05:22:05 compute-0 podman[181812]: 2025-11-29 05:22:05.102321857 +0000 UTC m=+0.191623278 container attach 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:22:05 compute-0 dazzling_mendeleev[181917]: 167 167
Nov 29 05:22:05 compute-0 systemd[1]: libpod-990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1.scope: Deactivated successfully.
Nov 29 05:22:05 compute-0 podman[181812]: 2025-11-29 05:22:05.108584075 +0000 UTC m=+0.197885506 container died 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:22:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3bd2428c14f1ab2c9a7ff89acd36600c1014faec3c6d66ac8f7e15047bb78f6-merged.mount: Deactivated successfully.
Nov 29 05:22:05 compute-0 podman[181812]: 2025-11-29 05:22:05.148522902 +0000 UTC m=+0.237824323 container remove 990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:22:05 compute-0 systemd[1]: libpod-conmon-990fd4a81f738f2e120bab4d47974c6d533d394ffef6059ad0b63efa4531d3b1.scope: Deactivated successfully.
Nov 29 05:22:05 compute-0 podman[181939]: 2025-11-29 05:22:05.222154377 +0000 UTC m=+0.124308113 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:22:05 compute-0 podman[182091]: 2025-11-29 05:22:05.320848385 +0000 UTC m=+0.038774454 container create 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 05:22:05 compute-0 systemd[1]: Started libpod-conmon-21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819.scope.
Nov 29 05:22:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb62b452809f5f596bea1a7808930132f71e0b06790f152b9a4d45f5001256f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb62b452809f5f596bea1a7808930132f71e0b06790f152b9a4d45f5001256f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb62b452809f5f596bea1a7808930132f71e0b06790f152b9a4d45f5001256f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb62b452809f5f596bea1a7808930132f71e0b06790f152b9a4d45f5001256f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:05 compute-0 podman[182091]: 2025-11-29 05:22:05.303773145 +0000 UTC m=+0.021699234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:22:05 compute-0 podman[182091]: 2025-11-29 05:22:05.408823763 +0000 UTC m=+0.126749852 container init 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:22:05 compute-0 podman[182091]: 2025-11-29 05:22:05.417569803 +0000 UTC m=+0.135495872 container start 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:22:05 compute-0 podman[182091]: 2025-11-29 05:22:05.42090294 +0000 UTC m=+0.138829009 container attach 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:22:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]: {
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:     "0": [
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:         {
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "devices": [
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "/dev/loop3"
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             ],
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_name": "ceph_lv0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_size": "21470642176",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "name": "ceph_lv0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "tags": {
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.cluster_name": "ceph",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.crush_device_class": "",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.encrypted": "0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.osd_id": "0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.type": "block",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.vdo": "0"
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             },
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "type": "block",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "vg_name": "ceph_vg0"
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:         }
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:     ],
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:     "1": [
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:         {
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "devices": [
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "/dev/loop4"
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             ],
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_name": "ceph_lv1",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_size": "21470642176",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "name": "ceph_lv1",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "tags": {
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.cluster_name": "ceph",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.crush_device_class": "",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.encrypted": "0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.osd_id": "1",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.type": "block",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.vdo": "0"
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             },
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "type": "block",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "vg_name": "ceph_vg1"
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:         }
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:     ],
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:     "2": [
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:         {
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "devices": [
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "/dev/loop5"
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             ],
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_name": "ceph_lv2",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_size": "21470642176",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "name": "ceph_lv2",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "tags": {
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.cluster_name": "ceph",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.crush_device_class": "",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.encrypted": "0",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.osd_id": "2",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.type": "block",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:                 "ceph.vdo": "0"
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             },
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "type": "block",
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:             "vg_name": "ceph_vg2"
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:         }
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]:     ]
Nov 29 05:22:06 compute-0 priceless_ardinghelli[182174]: }
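The "ceph-volume lvm list --format json" payload printed above keys OSD ids to their logical volumes, with the authoritative metadata carried in the LV tags. A minimal parsing sketch; the capture filename lvm_list.json is hypothetical, the payload is the JSON block above:

    import json

    # Summarize each OSD's backing storage from the `ceph-volume lvm list
    # --format json` output shown above. "lvm_list.json" is a hypothetical
    # file holding that payload.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, vols in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for vol in vols:
            tags = vol["tags"]
            print(f"osd.{osd_id}: lv={vol['lv_path']}"
                  f" devices={','.join(vol['devices'])}"
                  f" fsid={tags['ceph.osd_fsid']}"
                  f" encrypted={tags['ceph.encrypted']}")

Run against the payload above, this prints one line per OSD (osd.0 on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3, and so on), which is also consistent with the earlier "All data devices are unavailable" message: the lvm batch run was handed LVs that already carry prepared OSDs, so cephadm fell back to inventorying them with lvm list and raw list.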
Nov 29 05:22:06 compute-0 systemd[1]: libpod-21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819.scope: Deactivated successfully.
Nov 29 05:22:06 compute-0 podman[182091]: 2025-11-29 05:22:06.166250288 +0000 UTC m=+0.884176367 container died 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 05:22:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceb62b452809f5f596bea1a7808930132f71e0b06790f152b9a4d45f5001256f-merged.mount: Deactivated successfully.
Nov 29 05:22:06 compute-0 podman[182091]: 2025-11-29 05:22:06.239611519 +0000 UTC m=+0.957537588 container remove 21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 05:22:06 compute-0 systemd[1]: libpod-conmon-21ffb3e38944d7ae53daf858a8e2dc9b0a99873f1de7670c5a666818ba368819.scope: Deactivated successfully.
Nov 29 05:22:06 compute-0 sudo[181523]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:06 compute-0 sshd-session[180981]: Invalid user ventas01 from 101.47.141.125 port 46900
Nov 29 05:22:06 compute-0 sudo[182703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:22:06 compute-0 sudo[182703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:06 compute-0 sudo[182703]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:06 compute-0 sudo[182778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:22:06 compute-0 sudo[182778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:06 compute-0 sudo[182778]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:06 compute-0 sudo[182848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:22:06 compute-0 sudo[182848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:06 compute-0 sudo[182848]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:06 compute-0 sudo[182918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:22:06 compute-0 sudo[182918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:06 compute-0 ceph-mon[75176]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:06 compute-0 podman[183177]: 2025-11-29 05:22:06.935350503 +0000 UTC m=+0.042160352 container create f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:22:06 compute-0 systemd[1]: Started libpod-conmon-f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0.scope.
Nov 29 05:22:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:22:07 compute-0 podman[183177]: 2025-11-29 05:22:06.914397465 +0000 UTC m=+0.021207364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:22:07 compute-0 podman[183177]: 2025-11-29 05:22:07.02568435 +0000 UTC m=+0.132494299 container init f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:22:07 compute-0 podman[183177]: 2025-11-29 05:22:07.031913238 +0000 UTC m=+0.138723107 container start f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:22:07 compute-0 jolly_merkle[183247]: 167 167
Nov 29 05:22:07 compute-0 systemd[1]: libpod-f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0.scope: Deactivated successfully.
Nov 29 05:22:07 compute-0 podman[183177]: 2025-11-29 05:22:07.036592863 +0000 UTC m=+0.143402762 container attach f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:22:07 compute-0 podman[183177]: 2025-11-29 05:22:07.036975481 +0000 UTC m=+0.143785420 container died f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:22:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e6b75451c2a85fb471a1dc7138a518059634e44f9653ffd7b3d8530958e2f63-merged.mount: Deactivated successfully.
Nov 29 05:22:07 compute-0 podman[183177]: 2025-11-29 05:22:07.085283519 +0000 UTC m=+0.192093378 container remove f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:22:07 compute-0 systemd[1]: libpod-conmon-f4b21f6c652cb69b4a3418a79a36e4cec69123042243be209a80a74ba90370e0.scope: Deactivated successfully.
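
jolly_merkle shows cephadm's probe pattern in miniature: a container is created, started, attached, prints one line ("167 167", the ceph UID and GID inside the image, likely from a stat-style probe), dies, and is removed, all within a second. A sketch of the same one-shot pattern, assuming the podman CLI and the image digest from the log; the real cephadm invocation additionally bind-mounts devices, the cluster config, and keyrings:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot, self-removing probe container (heavily simplified sketch).
    subprocess.run(
        ["podman", "run", "--rm", "--privileged", "--net=host",
         "--entrypoint", "stat", IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        check=True,
    )
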
Nov 29 05:22:07 compute-0 podman[183366]: 2025-11-29 05:22:07.275418386 +0000 UTC m=+0.049329230 container create 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:22:07 compute-0 systemd[1]: Started libpod-conmon-4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30.scope.
Nov 29 05:22:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace02b478db82e528df03d92987cd13ad78459690e363e9e906e104aeed2d13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:07 compute-0 podman[183366]: 2025-11-29 05:22:07.255996889 +0000 UTC m=+0.029907773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace02b478db82e528df03d92987cd13ad78459690e363e9e906e104aeed2d13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace02b478db82e528df03d92987cd13ad78459690e363e9e906e104aeed2d13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:22:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eace02b478db82e528df03d92987cd13ad78459690e363e9e906e104aeed2d13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
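
The kernel prints one "supports timestamps until 2038" warning per bind target because this xfs was created without the bigtime feature, so inode timestamps are still 32-bit signed seconds; 0x7fffffff is exactly the classic time_t ceiling. A quick check of both facts (the xfs_info path is an assumption, point it at any xfs mount):

    import datetime, subprocess

    # The hex limit in the kernel message is 2038-01-19 03:14:07 UTC.
    print(datetime.datetime.fromtimestamp(0x7FFFFFFF, datetime.timezone.utc))

    # bigtime=1 in xfs_info output would lift the 2038 limit (needs xfsprogs).
    subprocess.run(["xfs_info", "/var/lib/containers"], check=True)
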
Nov 29 05:22:07 compute-0 podman[183366]: 2025-11-29 05:22:07.363153969 +0000 UTC m=+0.137064893 container init 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:22:07 compute-0 podman[183366]: 2025-11-29 05:22:07.369132562 +0000 UTC m=+0.143043406 container start 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:22:07 compute-0 podman[183366]: 2025-11-29 05:22:07.373383489 +0000 UTC m=+0.147294353 container attach 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:22:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:08 compute-0 goofy_banzai[183448]: {
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "osd_id": 0,
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "type": "bluestore"
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:     },
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "osd_id": 1,
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "type": "bluestore"
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:     },
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "osd_id": 2,
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:         "type": "bluestore"
Nov 29 05:22:08 compute-0 goofy_banzai[183448]:     }
Nov 29 05:22:08 compute-0 goofy_banzai[183448]: }
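
This block is the `ceph-volume raw list --format json` result that the cephadm wrapper at 05:22:06 asked for: a map keyed by osd_uuid, one entry per bluestore device. Reducing it to osd_id -> device, assuming exactly the structure shown:

    def osd_devices(raw_report: dict) -> dict:
        # {osd_uuid: {"ceph_fsid": ..., "device": ..., "osd_id": ..., "type": ...}}
        return {e["osd_id"]: e["device"] for e in raw_report.values()}

    # For the report above this returns:
    # {0: '/dev/mapper/ceph_vg0-ceph_lv0',
    #  1: '/dev/mapper/ceph_vg1-ceph_lv1',
    #  2: '/dev/mapper/ceph_vg2-ceph_lv2'}
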
Nov 29 05:22:08 compute-0 systemd[1]: libpod-4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30.scope: Deactivated successfully.
Nov 29 05:22:08 compute-0 podman[183366]: 2025-11-29 05:22:08.423914497 +0000 UTC m=+1.197825361 container died 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:22:08 compute-0 systemd[1]: libpod-4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30.scope: Consumed 1.058s CPU time.
Nov 29 05:22:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-eace02b478db82e528df03d92987cd13ad78459690e363e9e906e104aeed2d13-merged.mount: Deactivated successfully.
Nov 29 05:22:08 compute-0 podman[183366]: 2025-11-29 05:22:08.499767108 +0000 UTC m=+1.273677972 container remove 4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:22:08 compute-0 systemd[1]: libpod-conmon-4d05f6a7fb13ba31eaa7d66374cf4381c98e55ee87624a2d436fd773514cbd30.scope: Deactivated successfully.
Nov 29 05:22:08 compute-0 sudo[182918]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:22:08 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:22:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:22:08 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:22:08 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 166ee078-1135-4ebd-8e9f-0ff7677fed31 does not exist
Nov 29 05:22:08 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 0bb85cdb-3b09-40f1-af24-4c120a096a45 does not exist
Nov 29 05:22:08 compute-0 sudo[184145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:22:08 compute-0 sudo[184145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:08 compute-0 sudo[184145]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:08 compute-0 ceph-mon[75176]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:22:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:22:08 compute-0 sudo[184208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:22:08 compute-0 sudo[184208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:22:08 compute-0 sudo[184208]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:10 compute-0 ceph-mon[75176]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:22:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:22:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:22:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:22:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:22:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:22:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:12 compute-0 ceph-mon[75176]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:22:13.731 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:22:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:22:13.732 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:22:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:22:13.732 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:22:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:14 compute-0 ceph-mon[75176]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:14 compute-0 podman[187181]: 2025-11-29 05:22:14.989745556 +0000 UTC m=+0.046276527 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
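
Note that the config_data field in the health_status line above is logged as a Python literal (single quotes, bare True), not JSON, so scraping it from the journal calls for ast.literal_eval rather than json.loads. A sketch on a shortened excerpt of that value:

    import ast

    # Shortened excerpt of the config_data field from the line above.
    config_data_text = ("{'net': 'host', 'pid': 'host', 'privileged': True, "
                        "'volumes': ['/run/openvswitch:/run/openvswitch:z', "
                        "'/run/netns:/run/netns:shared']}")
    config_data = ast.literal_eval(config_data_text)
    for vol in config_data["volumes"]:
        src, _, rest = vol.partition(":")
        print(src, "->", rest)
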
Nov 29 05:22:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:16 compute-0 ceph-mon[75176]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:18 compute-0 ceph-mon[75176]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:20 compute-0 ceph-mon[75176]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:22 compute-0 ceph-mon[75176]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.674509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743674549, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 251, "total_data_size": 3532468, "memory_usage": 3594216, "flush_reason": "Manual Compaction"}
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743703731, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3446643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9697, "largest_seqno": 11740, "table_properties": {"data_size": 3437348, "index_size": 5917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17868, "raw_average_key_size": 19, "raw_value_size": 3418938, "raw_average_value_size": 3724, "num_data_blocks": 269, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764393513, "oldest_key_time": 1764393513, "file_creation_time": 1764393743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 29286 microseconds, and 11618 cpu microseconds.
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.703793) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3446643 bytes OK
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.703815) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.705464) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.705487) EVENT_LOG_v1 {"time_micros": 1764393743705479, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.705509) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3523933, prev total WAL file size 3523933, number of live WAL files 2.
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.707185) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3365KB)], [26(5930KB)]
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743707295, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9519243, "oldest_snapshot_seqno": -1}
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3693 keys, 7908377 bytes, temperature: kUnknown
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743760899, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7908377, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7879989, "index_size": 18038, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88684, "raw_average_key_size": 24, "raw_value_size": 7809623, "raw_average_value_size": 2114, "num_data_blocks": 782, "num_entries": 3693, "num_filter_entries": 3693, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764393743, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.761188) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7908377 bytes
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.763326) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.3 rd, 147.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4207, records dropped: 514 output_compression: NoCompression
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.763360) EVENT_LOG_v1 {"time_micros": 1764393743763344, "job": 10, "event": "compaction_finished", "compaction_time_micros": 53688, "compaction_time_cpu_micros": 23687, "output_level": 6, "num_output_files": 1, "total_output_size": 7908377, "num_input_records": 4207, "num_output_records": 3693, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743764640, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393743766545, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.707062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.766653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.766661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.766664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.766667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:22:23 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:22:23.766670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
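
The JOB 10 statistics above are internally consistent: from the raw byte counts in the flush and compaction events one can reproduce every derived figure RocksDB reports (note its MB/sec is decimal megabytes):

    # Byte counts from the JOB 9/10 event lines above.
    input_l0 = 3_446_643        # table #28, the freshly flushed L0 file
    input_total = 9_519_243     # input_data_size: L0 #28 + L6 #26
    output = 7_908_377          # table #29
    secs = 53_688 / 1e6         # compaction_time_micros

    print(round(output / input_l0, 1))                   # 2.3 -> write-amplify(2.3)
    print(round((input_total + output) / input_l0, 1))   # 5.1 -> read-write-amplify(5.1)
    print(round(input_total / secs / 1e6, 1))            # 177.3 -> "MB/sec: 177.3 rd"
    print(round(output / secs / 1e6, 1))                 # 147.3 -> "147.3 wr"

The 514 dropped records (4207 in, 3693 out) are the deletions and overwritten versions eliminated by merging the L0 file into L6.
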
Nov 29 05:22:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:24 compute-0 ceph-mon[75176]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:26 compute-0 ceph-mon[75176]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:28 compute-0 ceph-mon[75176]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:30 compute-0 ceph-mon[75176]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:31 compute-0 sshd-session[189647]: Invalid user odin from 45.120.216.232 port 48740
Nov 29 05:22:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:31 compute-0 sshd-session[189647]: Received disconnect from 45.120.216.232 port 48740:11: Bye Bye [preauth]
Nov 29 05:22:31 compute-0 sshd-session[189647]: Disconnected from invalid user odin 45.120.216.232 port 48740 [preauth]
Nov 29 05:22:32 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 29 05:22:32 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 05:22:32 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 29 05:22:32 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 05:22:32 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 29 05:22:32 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 05:22:32 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 05:22:32 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 05:22:32 compute-0 ceph-mon[75176]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:33 compute-0 groupadd[189662]: group added to /etc/group: name=dnsmasq, GID=991
Nov 29 05:22:33 compute-0 groupadd[189662]: group added to /etc/gshadow: name=dnsmasq
Nov 29 05:22:33 compute-0 groupadd[189662]: new group: name=dnsmasq, GID=991
Nov 29 05:22:33 compute-0 useradd[189669]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 29 05:22:33 compute-0 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 05:22:33 compute-0 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 05:22:33 compute-0 dbus-broker-launch[743]: Noticed file-system modification, trigger reload.
Nov 29 05:22:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:34 compute-0 groupadd[189682]: group added to /etc/group: name=clevis, GID=990
Nov 29 05:22:34 compute-0 groupadd[189682]: group added to /etc/gshadow: name=clevis
Nov 29 05:22:34 compute-0 groupadd[189682]: new group: name=clevis, GID=990
Nov 29 05:22:34 compute-0 useradd[189689]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 29 05:22:34 compute-0 usermod[189699]: add 'clevis' to group 'tss'
Nov 29 05:22:34 compute-0 usermod[189699]: add 'clevis' to shadow group 'tss'
Nov 29 05:22:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:34 compute-0 ceph-mon[75176]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:36 compute-0 podman[189722]: 2025-11-29 05:22:36.13923098 +0000 UTC m=+0.159435015 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:22:36 compute-0 polkitd[43510]: Reloading rules
Nov 29 05:22:36 compute-0 polkitd[43510]: Collecting garbage unconditionally...
Nov 29 05:22:36 compute-0 polkitd[43510]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 05:22:36 compute-0 polkitd[43510]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 05:22:36 compute-0 polkitd[43510]: Finished loading, compiling and executing 3 rules
Nov 29 05:22:36 compute-0 polkitd[43510]: Reloading rules
Nov 29 05:22:36 compute-0 polkitd[43510]: Collecting garbage unconditionally...
Nov 29 05:22:36 compute-0 polkitd[43510]: Loading rules from directory /etc/polkit-1/rules.d
Nov 29 05:22:36 compute-0 polkitd[43510]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 29 05:22:36 compute-0 polkitd[43510]: Finished loading, compiling and executing 3 rules
Nov 29 05:22:36 compute-0 sshd-session[189720]: Invalid user david from 152.32.145.111 port 37042
Nov 29 05:22:36 compute-0 ceph-mon[75176]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:36 compute-0 sshd-session[189720]: Received disconnect from 152.32.145.111 port 37042:11: Bye Bye [preauth]
Nov 29 05:22:36 compute-0 sshd-session[189720]: Disconnected from invalid user david 152.32.145.111 port 37042 [preauth]
Nov 29 05:22:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:37 compute-0 groupadd[189914]: group added to /etc/group: name=ceph, GID=167
Nov 29 05:22:37 compute-0 groupadd[189914]: group added to /etc/gshadow: name=ceph
Nov 29 05:22:37 compute-0 groupadd[189914]: new group: name=ceph, GID=167
Nov 29 05:22:37 compute-0 useradd[189920]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 29 05:22:38 compute-0 ceph-mon[75176]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:40 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 05:22:40 compute-0 sshd[1004]: Received signal 15; terminating.
Nov 29 05:22:40 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 05:22:40 compute-0 systemd[1]: sshd.service: Unit process 180981 (sshd-session) remains running after unit stopped.
Nov 29 05:22:40 compute-0 systemd[1]: sshd.service: Unit process 180989 (sshd-session) remains running after unit stopped.
Nov 29 05:22:40 compute-0 systemd[1]: sshd.service: Unit process 189659 (sshd-session) remains running after unit stopped.
Nov 29 05:22:40 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 05:22:40 compute-0 systemd[1]: sshd.service: Consumed 5.035s CPU time, 40.2M memory peak, read 564.0K from disk, written 152.0K to disk.
Nov 29 05:22:40 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 05:22:40 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 29 05:22:40 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 05:22:40 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 05:22:40 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 05:22:40 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 29 05:22:40 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 29 05:22:40 compute-0 sshd[190545]: Server listening on 0.0.0.0 port 22.
Nov 29 05:22:40 compute-0 sshd[190545]: Server listening on :: port 22.
Nov 29 05:22:40 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 29 05:22:40 compute-0 ceph-mon[75176]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:22:41
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.meta']
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
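
A balancer pass in upmap mode that prepares 0/10 changes means the evaluated pools are already as even as upmap can make them, which is unsurprising here with all 305 PGs active+clean on a small cluster. The same state can be inspected interactively, e.g. (assuming admin access to the cluster; run via subprocess only as a sketch):

    import subprocess

    subprocess.run(["ceph", "balancer", "status"], check=True)  # mode, plans, last optimize result
    subprocess.run(["ceph", "balancer", "eval"], check=True)    # numeric score of the current distribution
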
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
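
The duplicated per-pool load_schedules lines come from the two rbd_support handlers (TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler) reloading in parallel; the empty start_after= suggests no schedules are configured yet. Listing them from the CLI, assuming the schedule subcommands and --recursive flag of current rbd releases:

    import subprocess

    subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--recursive"], check=True)
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"], check=True)
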
Nov 29 05:22:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:42 compute-0 ceph-mon[75176]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:42 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 05:22:42 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 05:22:43 compute-0 systemd[1]: Reloading.
Nov 29 05:22:43 compute-0 systemd-sysv-generator[190805]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:22:43 compute-0 systemd-rc-local-generator[190802]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:22:43 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 05:22:43 compute-0 auditd[700]: Audit daemon rotating log files
Nov 29 05:22:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:44 compute-0 sshd-session[189659]: error: kex_exchange_identification: read: Connection timed out
Nov 29 05:22:44 compute-0 sshd-session[189659]: banner exchange: Connection from 120.48.175.69 port 39740: Connection timed out
Nov 29 05:22:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:45 compute-0 ceph-mon[75176]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:46 compute-0 podman[193625]: 2025-11-29 05:22:46.010977122 +0000 UTC m=+0.061130822 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:22:46 compute-0 ceph-mon[75176]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:46 compute-0 sudo[171363]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:47 compute-0 sudo[195133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnkrtwbypwbrcdabrmtyjlttfrpypevx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393766.6110458-336-45819410976109/AnsiballZ_systemd.py'
Nov 29 05:22:47 compute-0 sudo[195133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:22:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:47 compute-0 python3.9[195165]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 05:22:47 compute-0 systemd[1]: Reloading.
Nov 29 05:22:47 compute-0 systemd-rc-local-generator[195591]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:22:47 compute-0 systemd-sysv-generator[195597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:22:48 compute-0 sudo[195133]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:48 compute-0 sudo[196421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aledymusozjxhkzaqyjbydwtebaqmdax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393768.2153447-336-48471977849097/AnsiballZ_systemd.py'
Nov 29 05:22:48 compute-0 sudo[196421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:22:48 compute-0 ceph-mon[75176]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:48 compute-0 python3.9[196443]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 05:22:48 compute-0 systemd[1]: Reloading.
Nov 29 05:22:49 compute-0 systemd-sysv-generator[196809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:22:49 compute-0 systemd-rc-local-generator[196806]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:22:49 compute-0 sudo[196421]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:49 compute-0 sudo[197529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvhcuzxxoypqilftawaordurqahkhbbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393769.4411223-336-100747564437082/AnsiballZ_systemd.py'
Nov 29 05:22:49 compute-0 sudo[197529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:22:50 compute-0 python3.9[197551]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 05:22:50 compute-0 systemd[1]: Reloading.
Nov 29 05:22:50 compute-0 systemd-rc-local-generator[197927]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:22:50 compute-0 systemd-sysv-generator[197933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:22:50 compute-0 sudo[197529]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:50 compute-0 ceph-mon[75176]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:51 compute-0 sudo[198726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eosbnqwkvuwpltybbgvcjqxpydvnbvub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393770.784573-336-277397495658717/AnsiballZ_systemd.py'
Nov 29 05:22:51 compute-0 sudo[198726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
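[editor's note] The pg_autoscaler lines above follow a simple proportion: a pool's raw PG target is its share of used capacity, times its bias, times the cluster-wide PG budget. Assuming 3 OSDs and the default mon_target_pg_per_osd of 100 (a budget of 300 PGs, which is what the logged numbers imply), the figures reproduce exactly; the raw value is then quantized (broadly, toward a power of two, never below the pool's current floor — hence "quantized to 32 (current 32)"):

# Sketch of the pg_autoscaler arithmetic visible in the log.
# Assumption: 3 OSDs, mon_target_pg_per_osd=100 (Ceph default).
def raw_pg_target(used_ratio: float, bias: float,
                  osds: int = 3, target_pg_per_osd: int = 100) -> float:
    return used_ratio * bias * osds * target_pg_per_osd

# Pool '.mgr':               7.185749983720779e-06 * 1.0 * 300
print(raw_pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337
# Pool 'cephfs.cephfs.meta': 5.087256625643029e-07 * 4.0 * 300
print(raw_pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635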
Nov 29 05:22:51 compute-0 python3.9[198762]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 05:22:51 compute-0 systemd[1]: Reloading.
Nov 29 05:22:51 compute-0 systemd-rc-local-generator[199144]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:22:51 compute-0 systemd-sysv-generator[199152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:22:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:51 compute-0 sudo[198726]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:52 compute-0 sudo[200076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apvjpghzvvhvasqealfobxoilqniemzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393772.0016654-365-23392979833695/AnsiballZ_systemd.py'
Nov 29 05:22:52 compute-0 sudo[200076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:22:52 compute-0 ceph-mon[75176]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 05:22:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 05:22:52 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.394s CPU time.
Nov 29 05:22:52 compute-0 systemd[1]: run-r35a3a9f07b1c4a2bbea754b2120e0f87.service: Deactivated successfully.
Nov 29 05:22:52 compute-0 python3.9[200095]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:22:52 compute-0 systemd[1]: Reloading.
Nov 29 05:22:53 compute-0 systemd-sysv-generator[200160]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:22:53 compute-0 systemd-rc-local-generator[200156]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:22:53 compute-0 sudo[200076]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:53 compute-0 sudo[200312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txgvykaveuahmyhqwugpmgxmlukdtytl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393773.684148-365-121719519675816/AnsiballZ_systemd.py'
Nov 29 05:22:53 compute-0 sudo[200312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:22:54 compute-0 python3.9[200314]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:22:54 compute-0 systemd[1]: Reloading.
Nov 29 05:22:54 compute-0 systemd-rc-local-generator[200346]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:22:54 compute-0 systemd-sysv-generator[200350]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:22:54 compute-0 ceph-mon[75176]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:54 compute-0 sudo[200312]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:55 compute-0 sudo[200502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xknbktlsozhrrvsefhzmvjeddrdcznxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393774.926288-365-274566944300933/AnsiballZ_systemd.py'
Nov 29 05:22:55 compute-0 sudo[200502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:22:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:55 compute-0 python3.9[200504]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:22:55 compute-0 systemd[1]: Reloading.
Nov 29 05:22:55 compute-0 systemd-rc-local-generator[200530]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:22:55 compute-0 systemd-sysv-generator[200537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:22:56 compute-0 sudo[200502]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:56 compute-0 sudo[200692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqxvgibjfcwqoyhqsaigivvzvgdfladx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393776.1880605-365-4176982257646/AnsiballZ_systemd.py'
Nov 29 05:22:56 compute-0 sudo[200692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:22:56 compute-0 ceph-mon[75176]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:56 compute-0 python3.9[200694]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:22:57 compute-0 sudo[200692]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:57 compute-0 sudo[200847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojrslvwptsrwohyrbixrawvabkhroxrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393777.1905556-365-27606135943297/AnsiballZ_systemd.py'
Nov 29 05:22:57 compute-0 sudo[200847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:22:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:57 compute-0 python3.9[200849]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:22:58 compute-0 ceph-mon[75176]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:58 compute-0 systemd[1]: Reloading.
Nov 29 05:22:59 compute-0 systemd-rc-local-generator[200881]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:22:59 compute-0 systemd-sysv-generator[200885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:22:59 compute-0 sudo[200847]: pam_unix(sudo:session): session closed for user root
Nov 29 05:22:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:22:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:22:59 compute-0 sudo[201037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjzwmxsjuvwzhifreatzhsoalzcakehi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393779.4862857-401-218951594534905/AnsiballZ_systemd.py'
Nov 29 05:22:59 compute-0 sudo[201037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:00 compute-0 python3.9[201039]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 05:23:00 compute-0 systemd[1]: Reloading.
Nov 29 05:23:00 compute-0 systemd-sysv-generator[201076]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:23:00 compute-0 systemd-rc-local-generator[201072]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:23:00 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 05:23:00 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 05:23:00 compute-0 sudo[201037]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:00 compute-0 ceph-mon[75176]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:01 compute-0 sudo[201231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwtdhngdahwtaitnyzjioziuhtlsosjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393780.8775256-409-11618449775956/AnsiballZ_systemd.py'
Nov 29 05:23:01 compute-0 sudo[201231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:01 compute-0 python3.9[201233]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:01 compute-0 sudo[201231]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:01 compute-0 sshd-session[199075]: error: kex_exchange_identification: read: Connection timed out
Nov 29 05:23:01 compute-0 sshd-session[199075]: banner exchange: Connection from 120.48.175.69 port 43686: Connection timed out
Nov 29 05:23:02 compute-0 sudo[201386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpkpigdbndkrlqnezrsdueqeuhhruwkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393781.8393612-409-53662111781847/AnsiballZ_systemd.py'
Nov 29 05:23:02 compute-0 sudo[201386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:02 compute-0 python3.9[201388]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:02 compute-0 sudo[201386]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:02 compute-0 ceph-mon[75176]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:03 compute-0 sudo[201541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjuuqpeooacstdtzlpesnnacsduekxtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393782.774857-409-259634119337498/AnsiballZ_systemd.py'
Nov 29 05:23:03 compute-0 sudo[201541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:03 compute-0 python3.9[201543]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:03 compute-0 sudo[201541]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:04 compute-0 sudo[201696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggsmqhyxcwjagpzrfjezuvmyrccuebwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393783.7468452-409-185359072832455/AnsiballZ_systemd.py'
Nov 29 05:23:04 compute-0 sudo[201696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:04 compute-0 python3.9[201698]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:04 compute-0 sudo[201696]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:04 compute-0 ceph-mon[75176]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:23:04 compute-0 sudo[201851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjsxnvfhmebkbtvxdcuungasaobbqkhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393784.5728328-409-178155302543531/AnsiballZ_systemd.py'
Nov 29 05:23:04 compute-0 sudo[201851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:05 compute-0 python3.9[201853]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:05 compute-0 sudo[201851]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:05 compute-0 sudo[202006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceefcyqlzvmbydcxihvzvgbizxrmcmfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393785.480587-409-102316630112043/AnsiballZ_systemd.py'
Nov 29 05:23:05 compute-0 sudo[202006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:06 compute-0 python3.9[202008]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:06 compute-0 sudo[202006]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:06 compute-0 podman[202011]: 2025-11-29 05:23:06.324647886 +0000 UTC m=+0.123036181 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 05:23:06 compute-0 sudo[202188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osshqetgcrfegtoxqcklaafspannxwof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393786.3873415-409-205881570276666/AnsiballZ_systemd.py'
Nov 29 05:23:06 compute-0 sudo[202188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:06 compute-0 ceph-mon[75176]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:06 compute-0 python3.9[202190]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:07 compute-0 sudo[202188]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:07 compute-0 sudo[202343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixsolreobgcdjrmqciqspmnyjdnmlygd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393787.2076557-409-126083273748665/AnsiballZ_systemd.py'
Nov 29 05:23:07 compute-0 sudo[202343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:07 compute-0 python3.9[202345]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:07 compute-0 sudo[202343]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:08 compute-0 sudo[202498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knrgkogrrrpyztoscrukplxjfweojaob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393788.0674834-409-196993121074295/AnsiballZ_systemd.py'
Nov 29 05:23:08 compute-0 sudo[202498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:08 compute-0 python3.9[202500]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:08 compute-0 sudo[202498]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:08 compute-0 ceph-mon[75176]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:08 compute-0 sudo[202511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:23:08 compute-0 sudo[202511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:08 compute-0 sudo[202511]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:08 compute-0 sudo[202553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:23:08 compute-0 sudo[202553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:08 compute-0 sudo[202553]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:08 compute-0 sudo[202601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:23:08 compute-0 sudo[202601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:08 compute-0 sudo[202601]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:08 compute-0 sudo[202655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:23:08 compute-0 sudo[202655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:09 compute-0 sudo[202765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlgdvpweuqfaxyyweepwcfqvzqahhxah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393788.8400593-409-212587972510237/AnsiballZ_systemd.py'
Nov 29 05:23:09 compute-0 sudo[202765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:09 compute-0 sudo[202655]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:23:09 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:23:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:23:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:23:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:23:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:23:09 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 262e0792-65b7-4508-bc8e-b7a5a41629cf does not exist
Nov 29 05:23:09 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev e9850d3c-6c07-462a-918a-babc66a098dc does not exist
Nov 29 05:23:09 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 72039eec-0efd-43f2-9d3f-dd5d27a4abf8 does not exist
Nov 29 05:23:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:23:09 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:23:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:23:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:23:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:23:09 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
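[editor's note] This audit trail (config generate-minimal-conf, auth get for client.admin and client.bootstrap-osd, osd tree for destroyed OSDs) is the cephadm mgr module collecting the config and keyrings it needs before provisioning OSDs. The same minimal client config can be produced by hand; a sketch assuming the `ceph` CLI and an admin keyring are present on this host:

# Hedged sketch: fetch the same minimal client config the mgr requested above.
import subprocess

minimal_conf = subprocess.run(
    ["ceph", "config", "generate-minimal-conf"],
    capture_output=True, text=True, check=True,
).stdout
print(minimal_conf)  # [global] section with fsid and mon_host for client hosts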
Nov 29 05:23:09 compute-0 sudo[202785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:23:09 compute-0 sudo[202785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:09 compute-0 sudo[202785]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:09 compute-0 python3.9[202767]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:09 compute-0 sudo[202810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:23:09 compute-0 sudo[202810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:09 compute-0 sudo[202810]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:09 compute-0 sudo[202765]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:09 compute-0 sudo[202838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:23:09 compute-0 sudo[202838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:09 compute-0 sudo[202838]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:09 compute-0 sudo[202884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:23:09 compute-0 sudo[202884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:23:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:23:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:23:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:23:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:23:09 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:23:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:23:10 compute-0 sudo[203088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvttjsiawktwfskenwkzvjscqpoomhsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393789.671649-409-157082107669784/AnsiballZ_systemd.py'
Nov 29 05:23:10 compute-0 sudo[203088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:10 compute-0 podman[203057]: 2025-11-29 05:23:10.032353112 +0000 UTC m=+0.049045355 container create 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 05:23:10 compute-0 systemd[1]: Started libpod-conmon-2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52.scope.
Nov 29 05:23:10 compute-0 podman[203057]: 2025-11-29 05:23:10.01393602 +0000 UTC m=+0.030628293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:23:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:23:10 compute-0 podman[203057]: 2025-11-29 05:23:10.130776269 +0000 UTC m=+0.147468522 container init 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:23:10 compute-0 podman[203057]: 2025-11-29 05:23:10.140875357 +0000 UTC m=+0.157567600 container start 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:23:10 compute-0 podman[203057]: 2025-11-29 05:23:10.144054925 +0000 UTC m=+0.160747188 container attach 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:23:10 compute-0 funny_visvesvaraya[203094]: 167 167
Nov 29 05:23:10 compute-0 systemd[1]: libpod-2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52.scope: Deactivated successfully.
Nov 29 05:23:10 compute-0 podman[203057]: 2025-11-29 05:23:10.152356079 +0000 UTC m=+0.169048352 container died 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 05:23:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e03f77141c3adb1e7a858aaf99c80bd108061cebc669fc9a227e8efa44c47dc-merged.mount: Deactivated successfully.
Nov 29 05:23:10 compute-0 podman[203057]: 2025-11-29 05:23:10.198305897 +0000 UTC m=+0.214998180 container remove 2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:23:10 compute-0 systemd[1]: libpod-conmon-2ea9b32b7c424a36d9576cd9c81a011ad2b3c980b32265807ad3f1c56cb3ae52.scope: Deactivated successfully.
Nov 29 05:23:10 compute-0 python3.9[203091]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:10 compute-0 podman[203117]: 2025-11-29 05:23:10.429732999 +0000 UTC m=+0.055304588 container create 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:23:10 compute-0 systemd[1]: Started libpod-conmon-581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b.scope.
Nov 29 05:23:10 compute-0 podman[203117]: 2025-11-29 05:23:10.410659311 +0000 UTC m=+0.036230930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:23:10 compute-0 sudo[203088]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:10 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:10 compute-0 podman[203117]: 2025-11-29 05:23:10.538148302 +0000 UTC m=+0.163719911 container init 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:23:10 compute-0 podman[203117]: 2025-11-29 05:23:10.550973836 +0000 UTC m=+0.176545425 container start 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:23:10 compute-0 podman[203117]: 2025-11-29 05:23:10.554352239 +0000 UTC m=+0.179923818 container attach 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:23:10 compute-0 ceph-mon[75176]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:11 compute-0 sudo[203290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbemckvlmxphmkzxmcstonqyzhndorjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393790.693018-409-100534086029136/AnsiballZ_systemd.py'
Nov 29 05:23:11 compute-0 sudo[203290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:11 compute-0 python3.9[203292]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:23:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:23:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:23:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:23:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:23:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:23:11 compute-0 sudo[203290]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:11 compute-0 crazy_blackburn[203136]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:23:11 compute-0 crazy_blackburn[203136]: --> relative data size: 1.0
Nov 29 05:23:11 compute-0 crazy_blackburn[203136]: --> All data devices are unavailable
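[editor's note] "All data devices are unavailable" from `ceph-volume lvm batch` most likely means the three LVs are already consumed by existing OSDs, making the batch run a no-op rather than a failure; consistent with that reading, cephadm immediately follows up with `lvm list --format json` a few lines below. A sketch of inspecting that output — the JSON layout (a dict of OSD id to a list of device entries carrying 'lv_path' and a 'tags' dict) is assumed from ceph-volume's documented format, and the cephadm path is taken verbatim from the log:

# Hedged sketch: confirm the LVs are 'unavailable' because OSDs already
# live on them, by parsing the same lvm-list call cephadm issues next.
import json
import subprocess

out = subprocess.run(
    ["/bin/python3",
     "/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/"
     "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
     "ceph-volume", "--fsid", "93f82912-647c-5e78-b081-707d0a2966d8",
     "--", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for osd_id, devices in json.loads(out).items():
    for dev in devices:
        print(osd_id, dev.get("lv_path"),
              dev.get("tags", {}).get("ceph.osd_fsid"))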
Nov 29 05:23:11 compute-0 systemd[1]: libpod-581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b.scope: Deactivated successfully.
Nov 29 05:23:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:11 compute-0 podman[203117]: 2025-11-29 05:23:11.593621207 +0000 UTC m=+1.219192796 container died 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 05:23:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c96ac8f55e5bd2bf130ff832d863ed21ad266fca3461ef35683bb310f12c5381-merged.mount: Deactivated successfully.
Nov 29 05:23:11 compute-0 podman[203117]: 2025-11-29 05:23:11.652483192 +0000 UTC m=+1.278054771 container remove 581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:23:11 compute-0 systemd[1]: libpod-conmon-581c74a9731837ea96f24aac8b84daee15dee83a90cb270289bb36642c56801b.scope: Deactivated successfully.
Nov 29 05:23:11 compute-0 sudo[202884]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:11 compute-0 sudo[203401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:23:11 compute-0 sudo[203401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:11 compute-0 sudo[203401]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:11 compute-0 sudo[203444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:23:11 compute-0 sudo[203444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:11 compute-0 sudo[203444]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:11 compute-0 sudo[203480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:23:11 compute-0 sudo[203480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:11 compute-0 sudo[203480]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:11 compute-0 sudo[203529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:23:11 compute-0 sudo[203529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:11 compute-0 sudo[203580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkrspjefvykgpqzjkpbdodyubloyhdau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393791.6555831-409-129839491715329/AnsiballZ_systemd.py'
Nov 29 05:23:11 compute-0 sudo[203580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:12 compute-0 podman[203620]: 2025-11-29 05:23:12.270404384 +0000 UTC m=+0.045156240 container create 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:23:12 compute-0 python3.9[203582]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:12 compute-0 systemd[1]: Started libpod-conmon-6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3.scope.
Nov 29 05:23:12 compute-0 podman[203620]: 2025-11-29 05:23:12.246284072 +0000 UTC m=+0.021035918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:23:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:23:12 compute-0 podman[203620]: 2025-11-29 05:23:12.364679968 +0000 UTC m=+0.139431814 container init 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 29 05:23:12 compute-0 podman[203620]: 2025-11-29 05:23:12.372794328 +0000 UTC m=+0.147546154 container start 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:23:12 compute-0 podman[203620]: 2025-11-29 05:23:12.376023967 +0000 UTC m=+0.150775833 container attach 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:23:12 compute-0 serene_merkle[203638]: 167 167
Nov 29 05:23:12 compute-0 systemd[1]: libpod-6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3.scope: Deactivated successfully.
Nov 29 05:23:12 compute-0 podman[203620]: 2025-11-29 05:23:12.378663352 +0000 UTC m=+0.153415208 container died 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:23:12 compute-0 sudo[203580]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f3d6359353b38356c40e9e077fa9d905c9e550b2d3d7050b13a74cbee5b59a9-merged.mount: Deactivated successfully.
Nov 29 05:23:12 compute-0 podman[203620]: 2025-11-29 05:23:12.416790958 +0000 UTC m=+0.191542784 container remove 6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:23:12 compute-0 systemd[1]: libpod-conmon-6b82d0ae3569b25cb11464c892be1f902bf49dce6f1d57050d40ed2240c5d7e3.scope: Deactivated successfully.
Nov 29 05:23:12 compute-0 podman[203694]: 2025-11-29 05:23:12.588224417 +0000 UTC m=+0.054463878 container create 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:23:12 compute-0 systemd[1]: Started libpod-conmon-6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a.scope.
Nov 29 05:23:12 compute-0 podman[203694]: 2025-11-29 05:23:12.568535614 +0000 UTC m=+0.034775115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:23:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:23:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ab0a873b475f961fca63388320d95c4295f9126898c8d427704176fa973912/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ab0a873b475f961fca63388320d95c4295f9126898c8d427704176fa973912/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ab0a873b475f961fca63388320d95c4295f9126898c8d427704176fa973912/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ab0a873b475f961fca63388320d95c4295f9126898c8d427704176fa973912/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:12 compute-0 podman[203694]: 2025-11-29 05:23:12.695127782 +0000 UTC m=+0.161367243 container init 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 05:23:12 compute-0 podman[203694]: 2025-11-29 05:23:12.708697405 +0000 UTC m=+0.174936866 container start 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:23:12 compute-0 podman[203694]: 2025-11-29 05:23:12.712058627 +0000 UTC m=+0.178298108 container attach 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:23:12 compute-0 ceph-mon[75176]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:12 compute-0 sudo[203834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlawpjlkiqjwowmarcrvfhdvknvzqury ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393792.5395098-409-92766610427023/AnsiballZ_systemd.py'
Nov 29 05:23:12 compute-0 sudo[203834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:13 compute-0 python3.9[203836]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 05:23:13 compute-0 sudo[203834]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:13 compute-0 pensive_carver[203757]: {
Nov 29 05:23:13 compute-0 pensive_carver[203757]:     "0": [
Nov 29 05:23:13 compute-0 pensive_carver[203757]:         {
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "devices": [
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "/dev/loop3"
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             ],
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_name": "ceph_lv0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_size": "21470642176",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "name": "ceph_lv0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "tags": {
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.cluster_name": "ceph",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.crush_device_class": "",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.encrypted": "0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.osd_id": "0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.type": "block",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.vdo": "0"
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             },
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "type": "block",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "vg_name": "ceph_vg0"
Nov 29 05:23:13 compute-0 pensive_carver[203757]:         }
Nov 29 05:23:13 compute-0 pensive_carver[203757]:     ],
Nov 29 05:23:13 compute-0 pensive_carver[203757]:     "1": [
Nov 29 05:23:13 compute-0 pensive_carver[203757]:         {
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "devices": [
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "/dev/loop4"
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             ],
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_name": "ceph_lv1",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_size": "21470642176",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "name": "ceph_lv1",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "tags": {
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.cluster_name": "ceph",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.crush_device_class": "",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.encrypted": "0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.osd_id": "1",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.type": "block",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.vdo": "0"
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             },
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "type": "block",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "vg_name": "ceph_vg1"
Nov 29 05:23:13 compute-0 pensive_carver[203757]:         }
Nov 29 05:23:13 compute-0 pensive_carver[203757]:     ],
Nov 29 05:23:13 compute-0 pensive_carver[203757]:     "2": [
Nov 29 05:23:13 compute-0 pensive_carver[203757]:         {
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "devices": [
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "/dev/loop5"
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             ],
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_name": "ceph_lv2",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_size": "21470642176",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "name": "ceph_lv2",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "tags": {
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.cluster_name": "ceph",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.crush_device_class": "",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.encrypted": "0",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.osd_id": "2",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.type": "block",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:                 "ceph.vdo": "0"
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             },
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "type": "block",
Nov 29 05:23:13 compute-0 pensive_carver[203757]:             "vg_name": "ceph_vg2"
Nov 29 05:23:13 compute-0 pensive_carver[203757]:         }
Nov 29 05:23:13 compute-0 pensive_carver[203757]:     ]
Nov 29 05:23:13 compute-0 pensive_carver[203757]: }
Nov 29 05:23:13 compute-0 systemd[1]: libpod-6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a.scope: Deactivated successfully.
Nov 29 05:23:13 compute-0 podman[203694]: 2025-11-29 05:23:13.440889613 +0000 UTC m=+0.907129064 container died 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:23:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-96ab0a873b475f961fca63388320d95c4295f9126898c8d427704176fa973912-merged.mount: Deactivated successfully.
Nov 29 05:23:13 compute-0 podman[203694]: 2025-11-29 05:23:13.50674285 +0000 UTC m=+0.972982311 container remove 6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:23:13 compute-0 systemd[1]: libpod-conmon-6f6f8c1f21ed35772ba4cbca6fd3fcbc5864fa00476d81e1eda11fea3f351a4a.scope: Deactivated successfully.
Nov 29 05:23:13 compute-0 sudo[203529]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:13 compute-0 sudo[203906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:23:13 compute-0 sudo[203906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:13 compute-0 sudo[203906]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:13 compute-0 sudo[203958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:23:13 compute-0 sudo[203958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:13 compute-0 sudo[203958]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:13 compute-0 sudo[204006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:23:13 compute-0 sudo[204006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:13 compute-0 sudo[204006]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:23:13.732 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:23:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:23:13.734 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:23:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:23:13.734 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:23:13 compute-0 sudo[204036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:23:13 compute-0 sudo[204036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:13 compute-0 sudo[204106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwksjkgdssszwelrzfrprroaixgxebix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393793.5330975-511-47308804840651/AnsiballZ_file.py'
Nov 29 05:23:13 compute-0 sudo[204106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:14 compute-0 python3.9[204109]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:23:14 compute-0 sudo[204106]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:14 compute-0 podman[204159]: 2025-11-29 05:23:14.156870812 +0000 UTC m=+0.052853008 container create 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:23:14 compute-0 systemd[1]: Started libpod-conmon-80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c.scope.
Nov 29 05:23:14 compute-0 podman[204159]: 2025-11-29 05:23:14.128014454 +0000 UTC m=+0.023996680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:23:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:23:14 compute-0 podman[204159]: 2025-11-29 05:23:14.276587592 +0000 UTC m=+0.172569828 container init 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 29 05:23:14 compute-0 podman[204159]: 2025-11-29 05:23:14.292072932 +0000 UTC m=+0.188055118 container start 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 05:23:14 compute-0 nostalgic_pare[204211]: 167 167
Nov 29 05:23:14 compute-0 podman[204159]: 2025-11-29 05:23:14.295943797 +0000 UTC m=+0.191926023 container attach 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:23:14 compute-0 systemd[1]: libpod-80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c.scope: Deactivated successfully.
Nov 29 05:23:14 compute-0 podman[204159]: 2025-11-29 05:23:14.296319956 +0000 UTC m=+0.192302152 container died 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:23:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-32f5466122754c2f9e74b041d916913fe70b8891ea140c22d479967afa68968b-merged.mount: Deactivated successfully.
Nov 29 05:23:14 compute-0 podman[204159]: 2025-11-29 05:23:14.338563243 +0000 UTC m=+0.234545429 container remove 80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:23:14 compute-0 systemd[1]: libpod-conmon-80cc398ceed049db98f76c84b7dd7a9ea04e5b242e904c786cc7e3156dd32c5c.scope: Deactivated successfully.
Nov 29 05:23:14 compute-0 sudo[204349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajzvkueollyczziqermegscpbiiotkjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393794.222105-511-28619528249586/AnsiballZ_file.py'
Nov 29 05:23:14 compute-0 sudo[204349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:14 compute-0 podman[204311]: 2025-11-29 05:23:14.574048715 +0000 UTC m=+0.059254125 container create d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:23:14 compute-0 systemd[1]: Started libpod-conmon-d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791.scope.
Nov 29 05:23:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0474c531b42668abb8c743e698d97fc00e057ee1bcfb92260ffdf33efbf9e68b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0474c531b42668abb8c743e698d97fc00e057ee1bcfb92260ffdf33efbf9e68b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0474c531b42668abb8c743e698d97fc00e057ee1bcfb92260ffdf33efbf9e68b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0474c531b42668abb8c743e698d97fc00e057ee1bcfb92260ffdf33efbf9e68b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:23:14 compute-0 podman[204311]: 2025-11-29 05:23:14.549525574 +0000 UTC m=+0.034730974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:23:14 compute-0 podman[204311]: 2025-11-29 05:23:14.663593794 +0000 UTC m=+0.148799184 container init d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:23:14 compute-0 podman[204311]: 2025-11-29 05:23:14.669609651 +0000 UTC m=+0.154815031 container start d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:23:14 compute-0 podman[204311]: 2025-11-29 05:23:14.672120883 +0000 UTC m=+0.157326263 container attach d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:23:14 compute-0 ceph-mon[75176]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:23:14 compute-0 python3.9[204353]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:23:14 compute-0 sudo[204349]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:15 compute-0 sudo[204511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xohjocvdotdybtcccsisluiwgallzygz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393795.0051246-511-121763769421618/AnsiballZ_file.py'
Nov 29 05:23:15 compute-0 sudo[204511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:15 compute-0 python3.9[204514]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:23:15 compute-0 sudo[204511]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]: {
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "osd_id": 0,
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "type": "bluestore"
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:     },
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "osd_id": 1,
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "type": "bluestore"
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:     },
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "osd_id": 2,
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:         "type": "bluestore"
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]:     }
Nov 29 05:23:15 compute-0 upbeat_driscoll[204357]: }
Nov 29 05:23:15 compute-0 systemd[1]: libpod-d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791.scope: Deactivated successfully.
Nov 29 05:23:15 compute-0 podman[204311]: 2025-11-29 05:23:15.588027861 +0000 UTC m=+1.073233241 container died d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 05:23:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0474c531b42668abb8c743e698d97fc00e057ee1bcfb92260ffdf33efbf9e68b-merged.mount: Deactivated successfully.
Nov 29 05:23:15 compute-0 podman[204311]: 2025-11-29 05:23:15.641553446 +0000 UTC m=+1.126758826 container remove d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_driscoll, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:23:15 compute-0 systemd[1]: libpod-conmon-d6f7bea9398cfde8b4ba0e4682ffac61165378ba2a45f9bd901a9cdac184a791.scope: Deactivated successfully.
Nov 29 05:23:15 compute-0 sudo[204036]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:23:15 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:23:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:23:15 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:23:15 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 68f1e717-6d27-47be-9785-5b8db4d6ba61 does not exist
Nov 29 05:23:15 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 9fa41ef3-8a02-47c8-b2ed-b30225615518 does not exist
Nov 29 05:23:15 compute-0 sudo[204629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:23:15 compute-0 sudo[204629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:15 compute-0 sudo[204629]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:15 compute-0 sudo[204677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:23:15 compute-0 sudo[204677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:23:15 compute-0 sudo[204677]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:15 compute-0 sudo[204752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfayodamofzhbrlaryifsithsiwtdzpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393795.649231-511-7605850391577/AnsiballZ_file.py'
Nov 29 05:23:15 compute-0 sudo[204752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:16 compute-0 python3.9[204754]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:23:16 compute-0 sudo[204752]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:16 compute-0 ceph-mon[75176]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:16 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:23:16 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:23:16 compute-0 sudo[204916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygjhxhvxpikhxkwgsfvomujplisswsyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393796.3389845-511-96366760673595/AnsiballZ_file.py'
Nov 29 05:23:16 compute-0 sudo[204916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:16 compute-0 podman[204878]: 2025-11-29 05:23:16.98167775 +0000 UTC m=+0.083460700 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 05:23:17 compute-0 python3.9[204922]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:23:17 compute-0 sudo[204916]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:17 compute-0 sudo[205072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvgtzbtvprmbqjucdxgpktohzgkecibr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393797.3654041-511-227861937456402/AnsiballZ_file.py'
Nov 29 05:23:17 compute-0 sudo[205072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:17 compute-0 python3.9[205074]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:23:17 compute-0 sudo[205072]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:18 compute-0 sudo[205224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzyujmcudpeablluuvoepcntcnjwlvle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393798.163104-554-151506864458786/AnsiballZ_stat.py'
Nov 29 05:23:18 compute-0 sudo[205224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:18 compute-0 ceph-mon[75176]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:18 compute-0 python3.9[205226]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:18 compute-0 sudo[205224]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:19 compute-0 sudo[205349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zimyhxxdfeendwkuhxmpxcgwxgsdxren ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393798.163104-554-151506864458786/AnsiballZ_copy.py'
Nov 29 05:23:19 compute-0 sudo[205349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:19 compute-0 python3.9[205351]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393798.163104-554-151506864458786/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:19 compute-0 sudo[205349]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:23:20 compute-0 sudo[205501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htgwlfcyibwzqqmqnnpsguibgntjeefk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393799.8034317-554-151642909029897/AnsiballZ_stat.py'
Nov 29 05:23:20 compute-0 sudo[205501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:20 compute-0 python3.9[205503]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:20 compute-0 sudo[205501]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:20 compute-0 ceph-mon[75176]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:20 compute-0 sudo[205626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wonjgksxywtxgtlzncpwscyafnsajunt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393799.8034317-554-151642909029897/AnsiballZ_copy.py'
Nov 29 05:23:20 compute-0 sudo[205626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:21 compute-0 python3.9[205628]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393799.8034317-554-151642909029897/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:21 compute-0 sudo[205626]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:21 compute-0 sudo[205778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umwkvzbjtcgwlcltdclffudboltuknyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393801.2577085-554-39836754554528/AnsiballZ_stat.py'
Nov 29 05:23:21 compute-0 sudo[205778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:21 compute-0 python3.9[205780]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:21 compute-0 sudo[205778]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:22 compute-0 sudo[205903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtzbrhtazallxgigirkdoaixjcvxjkhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393801.2577085-554-39836754554528/AnsiballZ_copy.py'
Nov 29 05:23:22 compute-0 sudo[205903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:22 compute-0 python3.9[205905]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393801.2577085-554-39836754554528/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:22 compute-0 sudo[205903]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:22 compute-0 ceph-mon[75176]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:23 compute-0 sudo[206055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bysgxolwxcqfrvcxmzdnbmnuncgojctu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393802.8274128-554-60614321737235/AnsiballZ_stat.py'
Nov 29 05:23:23 compute-0 sudo[206055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:23 compute-0 python3.9[206057]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:23 compute-0 sudo[206055]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:23 compute-0 sudo[206180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbkegfxkhhzrkcyvfzdxlmoecfxlhhpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393802.8274128-554-60614321737235/AnsiballZ_copy.py'
Nov 29 05:23:23 compute-0 sudo[206180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:24 compute-0 python3.9[206182]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393802.8274128-554-60614321737235/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:24 compute-0 sudo[206180]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:24 compute-0 ceph-mon[75176]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
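[editor's note] The recurring mon.compute-0 _set_new_cache_sizes lines are the monitor's memory autotuner at work: a cache budget of 1020054731 bytes (~0.95 GiB) is split into ~332 MiB incremental-map and full-map allocations and a ~308 MiB RocksDB (kv) allocation. The values repeat unchanged on every pass because the cluster is idle.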
Nov 29 05:23:24 compute-0 sudo[206332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqjatajmgucjyuzgmpcvzfruihoblggi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393804.2763977-554-9164039879860/AnsiballZ_stat.py'
Nov 29 05:23:24 compute-0 sudo[206332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:25 compute-0 python3.9[206334]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:25 compute-0 sudo[206332]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:25 compute-0 sudo[206457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swjfszzxjgdfoqmtuxdkdjrqphaozpbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393804.2763977-554-9164039879860/AnsiballZ_copy.py'
Nov 29 05:23:25 compute-0 sudo[206457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:25 compute-0 python3.9[206459]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393804.2763977-554-9164039879860/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:25 compute-0 sudo[206457]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:26 compute-0 sudo[206609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjigorvmwukpvipmloezjwundsickioq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393805.8464243-554-237896874963711/AnsiballZ_stat.py'
Nov 29 05:23:26 compute-0 sudo[206609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:26 compute-0 python3.9[206611]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:26 compute-0 sudo[206609]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:26 compute-0 ceph-mon[75176]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:26 compute-0 sudo[206734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rponsasdzaatqlulrhqhfkdhfsxtkoap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393805.8464243-554-237896874963711/AnsiballZ_copy.py'
Nov 29 05:23:26 compute-0 sudo[206734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:26 compute-0 python3.9[206736]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393805.8464243-554-237896874963711/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:27 compute-0 sudo[206734]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:27 compute-0 sudo[206886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amsqhskenkcuifzogwubiglycdhklzjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393807.1231892-554-172629975962439/AnsiballZ_stat.py'
Nov 29 05:23:27 compute-0 sudo[206886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:27 compute-0 python3.9[206888]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:27 compute-0 sudo[206886]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:28 compute-0 sudo[207009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuewvhishpkyjidbdftcxlqepjieunta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393807.1231892-554-172629975962439/AnsiballZ_copy.py'
Nov 29 05:23:28 compute-0 sudo[207009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:28 compute-0 python3.9[207011]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393807.1231892-554-172629975962439/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:28 compute-0 sudo[207009]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:28 compute-0 ceph-mon[75176]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:28 compute-0 sudo[207161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyriqbqwcgodwbccbfxpizlobjdlykbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393808.4609098-554-165545473784338/AnsiballZ_stat.py'
Nov 29 05:23:28 compute-0 sudo[207161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:29 compute-0 python3.9[207163]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:29 compute-0 sudo[207161]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:29 compute-0 sudo[207286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myboadleemexvaccelkclhvhthdrihhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393808.4609098-554-165545473784338/AnsiballZ_copy.py'
Nov 29 05:23:29 compute-0 sudo[207286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:23:29 compute-0 python3.9[207288]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764393808.4609098-554-165545473784338/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:29 compute-0 sudo[207286]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:30 compute-0 sudo[207438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjcfdsswmksjwakaqenigawrsmpbcdsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393810.0938933-667-243084405753030/AnsiballZ_command.py'
Nov 29 05:23:30 compute-0 sudo[207438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:30 compute-0 python3.9[207440]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 29 05:23:30 compute-0 sudo[207438]: pam_unix(sudo:session): session closed for user root
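[editor's note] This step seeds the Cyrus SASL database that libvirt uses to authenticate live-migration peers; it pairs with the auth.conf (mode 0600, since it carries credentials) and /etc/sasl2/libvirt.conf files deployed just before. Reconstructed as an Ansible task — the parameters are copied from the logged invocation, the task name is an assumption, and 12345678 is the throwaway CI password the log itself records in the stdin parameter:

    - name: Create SASL user for libvirt migration (sketch)
      ansible.builtin.command:
        cmd: saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration
        stdin: "12345678"    # -p makes saslpasswd2 read the password from stdin
      become: true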
Nov 29 05:23:30 compute-0 ceph-mon[75176]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:31 compute-0 sudo[207591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsixxuqbmnrapqgtypqqoetqsxmwkxtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393811.0132205-676-88083825905409/AnsiballZ_file.py'
Nov 29 05:23:31 compute-0 sudo[207591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:31 compute-0 python3.9[207593]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:31 compute-0 sudo[207591]: pam_unix(sudo:session): session closed for user root
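[editor's note] The run now creates a systemd drop-in directory for every libvirt socket unit; the file tasks below repeat this once per unit. A hedged sketch of the loop that would generate them — the loop list is read straight off the paths logged in this section (virtlogd has no -ro socket, hence its absence):

    - name: Create libvirt socket drop-in directories (sketch)
      ansible.builtin.file:
        path: "/etc/systemd/system/{{ item }}.socket.d"
        state: directory
        owner: root
        group: root
        mode: "0755"
      loop:
        - virtlogd
        - virtlogd-admin
        - virtnodedevd
        - virtnodedevd-ro
        - virtnodedevd-admin
        - virtproxyd
        - virtproxyd-ro
        - virtproxyd-admin
        - virtqemud
        - virtqemud-ro
        - virtqemud-admin
        - virtsecretd
        - virtsecretd-ro
        - virtsecretd-admin
      become: true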
Nov 29 05:23:32 compute-0 sudo[207743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nglsmmvowkxhlnrxhvnyeaboyfgujtgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393811.854087-676-65737777092399/AnsiballZ_file.py'
Nov 29 05:23:32 compute-0 sudo[207743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:32 compute-0 python3.9[207745]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:32 compute-0 sudo[207743]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:32 compute-0 ceph-mon[75176]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:32 compute-0 sudo[207896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndknzlkhjqizldzpzfxcabyfdsmiuuxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393812.6482787-676-57648109277630/AnsiballZ_file.py'
Nov 29 05:23:32 compute-0 sudo[207896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:33 compute-0 python3.9[207898]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:33 compute-0 sudo[207896]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:33 compute-0 sudo[208048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpkrtgiuzntbcvnigvhvezqovdwqcctb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393813.408482-676-109252162184488/AnsiballZ_file.py'
Nov 29 05:23:33 compute-0 sudo[208048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:33 compute-0 python3.9[208050]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:34 compute-0 sudo[208048]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:34 compute-0 sudo[208200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkunxbpczttmaxmmfmdaiuouwsydqxjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393814.1788783-676-25889497859026/AnsiballZ_file.py'
Nov 29 05:23:34 compute-0 sudo[208200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:34 compute-0 python3.9[208202]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:34 compute-0 sudo[208200]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:23:34 compute-0 ceph-mon[75176]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:35 compute-0 sudo[208352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygidxkcvhgnpbetemkanbfxvqevhrtnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393814.9090753-676-99142127027949/AnsiballZ_file.py'
Nov 29 05:23:35 compute-0 sudo[208352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:35 compute-0 python3.9[208354]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:35 compute-0 sudo[208352]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:35 compute-0 sudo[208504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtsutyxxwpqkfagrieflcemklmrfeqqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393815.5849638-676-195534346102709/AnsiballZ_file.py'
Nov 29 05:23:35 compute-0 sudo[208504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:36 compute-0 python3.9[208506]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:36 compute-0 sudo[208504]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:36 compute-0 sudo[208673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crkaegdifeghfklltgwtbzghkujjzeph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393816.2733607-676-256604372330437/AnsiballZ_file.py'
Nov 29 05:23:36 compute-0 sudo[208673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:36 compute-0 podman[208630]: 2025-11-29 05:23:36.66333089 +0000 UTC m=+0.095655841 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
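[editor's note] The podman entry above is periodic container health checking, interleaved with (not part of) the Ansible run: the check executes /openstack/healthcheck inside ovn_controller and reports health_status=healthy with a zero failing streak; the embedded config_data is the container definition edpm_ansible originally handed to podman. The same check can be triggered by hand (illustrative; the container name is taken from the log):

    - name: Run the ovn_controller health check manually (sketch)
      ansible.builtin.command:
        cmd: podman healthcheck run ovn_controller
      become: true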
Nov 29 05:23:36 compute-0 ceph-mon[75176]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:36 compute-0 python3.9[208680]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:36 compute-0 sudo[208673]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:37 compute-0 sudo[208834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xohydvfxtnufziyrnqypvqzwypxoqhks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393817.0546024-676-229681312152178/AnsiballZ_file.py'
Nov 29 05:23:37 compute-0 sudo[208834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:37 compute-0 python3.9[208836]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:37 compute-0 sudo[208834]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:38 compute-0 sudo[208986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgfxyrhcnjvxphvkehaxilnvnnfyyfzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393817.941083-676-117370456882059/AnsiballZ_file.py'
Nov 29 05:23:38 compute-0 sudo[208986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:38 compute-0 python3.9[208988]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:38 compute-0 sudo[208986]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:38 compute-0 ceph-mon[75176]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:39 compute-0 sudo[209138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqrnjnkcovnocajtemlbapvpcpljngdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393818.7650898-676-184509440708396/AnsiballZ_file.py'
Nov 29 05:23:39 compute-0 sudo[209138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:39 compute-0 python3.9[209140]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:39 compute-0 sudo[209138]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:23:39 compute-0 sudo[209290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyvnottvytboekkjcgwelsguplhmgkke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393819.5763578-676-49332960991984/AnsiballZ_file.py'
Nov 29 05:23:39 compute-0 sudo[209290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:40 compute-0 python3.9[209292]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:40 compute-0 sudo[209290]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:40 compute-0 sudo[209442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekikiymvxqwxnjcoksqvhdacugloulta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393820.2546773-676-43406718679858/AnsiballZ_file.py'
Nov 29 05:23:40 compute-0 sudo[209442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:40 compute-0 python3.9[209444]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:40 compute-0 sudo[209442]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:40 compute-0 ceph-mon[75176]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:41 compute-0 sudo[209594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jphhjqpproakjusxwwnbznoicmhdizdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393820.9526825-676-176717332530232/AnsiballZ_file.py'
Nov 29 05:23:41 compute-0 sudo[209594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:23:41
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'backups', 'default.rgw.log', 'volumes', 'images']
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
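[editor's note] This ceph-mgr burst is routine module housekeeping: the balancer drew up an upmap plan and prepared 0/10 changes (all 305 PGs are already active+clean, so there is nothing to move), the volumes module swept for idle CephFS client connections and found none, and rbd_support reloaded the (empty) trash-purge and mirror-snapshot schedules for the vms, volumes, backups, and images pools. The same state can be read back from the CLI (illustrative):

    - name: Inspect balancer and RBD schedule state (sketch)
      ansible.builtin.shell: |
        ceph balancer status
        rbd trash purge schedule ls
      become: true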
Nov 29 05:23:41 compute-0 python3.9[209596]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:41 compute-0 sudo[209594]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:42 compute-0 sudo[209746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tycagtweefvkaqzebexdqjtoksypzbwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393821.6863337-775-645609685603/AnsiballZ_stat.py'
Nov 29 05:23:42 compute-0 sudo[209746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:42 compute-0 python3.9[209748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:42 compute-0 sudo[209746]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:42 compute-0 sudo[209869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvivoypfesezxkzmsfmuswichdfrfxap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393821.6863337-775-645609685603/AnsiballZ_copy.py'
Nov 29 05:23:42 compute-0 sudo[209869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:42 compute-0 ceph-mon[75176]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:42 compute-0 python3.9[209871]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393821.6863337-775-645609685603/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:42 compute-0 sudo[209869]: pam_unix(sudo:session): session closed for user root
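[editor's note] The override.conf written above is rendered from libvirt-socket.unit.j2, and every subsequent copy in this excerpt carries the identical checksum 0bad41f409b4ee7e780a2a59dc18f5c84ed99826, i.e. all sockets receive the same rendering. Its body is not logged (content=NOT_LOGGING_PARAMETER), so the drop-in below is a purely hypothetical illustration of the shape such a file takes, not a recovery of the real template:

    - name: Install socket drop-in override (hypothetical body)
      ansible.builtin.copy:
        dest: /etc/systemd/system/virtlogd.socket.d/override.conf
        owner: root
        group: root
        mode: "0644"
        content: |
          # HYPOTHETICAL example -- the real template body is not in this log
          [Socket]
          SocketGroup=libvirt
          SocketMode=0660
      become: true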
Nov 29 05:23:43 compute-0 sshd-session[207839]: error: kex_exchange_identification: read: Connection timed out
Nov 29 05:23:43 compute-0 sshd-session[207839]: banner exchange: Connection from 120.48.175.69 port 51538: Connection timed out
Nov 29 05:23:43 compute-0 sudo[210021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfirrjsxybftprygvdetcmtqaxquwddo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393823.1811814-775-164234158049912/AnsiballZ_stat.py'
Nov 29 05:23:43 compute-0 sudo[210021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:43 compute-0 python3.9[210023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:43 compute-0 sudo[210021]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:44 compute-0 sudo[210144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfmvtdwbjabrjfqidloewaizvwncgbpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393823.1811814-775-164234158049912/AnsiballZ_copy.py'
Nov 29 05:23:44 compute-0 sudo[210144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:44 compute-0 python3.9[210146]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393823.1811814-775-164234158049912/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:44 compute-0 sudo[210144]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:23:44 compute-0 ceph-mon[75176]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:45 compute-0 sudo[210296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftuydgsskgapghdjksudhcgxkxtoyeor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393824.8343341-775-35960863179172/AnsiballZ_stat.py'
Nov 29 05:23:45 compute-0 sudo[210296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:45 compute-0 python3.9[210298]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:45 compute-0 sudo[210296]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:45 compute-0 sudo[210419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpjqpeyzqwzddegwotzpztdcfsqywumm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393824.8343341-775-35960863179172/AnsiballZ_copy.py'
Nov 29 05:23:45 compute-0 sudo[210419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:46 compute-0 python3.9[210421]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393824.8343341-775-35960863179172/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:46 compute-0 sudo[210419]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:46 compute-0 sudo[210571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwmtbzbmluqyoyisebhzfzejxuimzepz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393826.2467074-775-83054989246832/AnsiballZ_stat.py'
Nov 29 05:23:46 compute-0 sudo[210571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:46 compute-0 python3.9[210573]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:46 compute-0 ceph-mon[75176]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:46 compute-0 sudo[210571]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:47 compute-0 sudo[210711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjvsdxaicsrxcsbxofvgfqfaauloskjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393826.2467074-775-83054989246832/AnsiballZ_copy.py'
Nov 29 05:23:47 compute-0 sudo[210711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:47 compute-0 podman[210670]: 2025-11-29 05:23:47.319130808 +0000 UTC m=+0.065003964 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 05:23:47 compute-0 python3.9[210717]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393826.2467074-775-83054989246832/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:47 compute-0 sudo[210711]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:47 compute-0 sudo[210867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kenwvubwgofzmxiyihlkokhmcvanhpis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393827.67271-775-260302718056186/AnsiballZ_stat.py'
Nov 29 05:23:47 compute-0 sudo[210867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:48 compute-0 python3.9[210869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:48 compute-0 sudo[210867]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:48 compute-0 sshd-session[210644]: Invalid user nominatim from 45.120.216.232 port 47634
Nov 29 05:23:48 compute-0 sshd-session[210644]: Received disconnect from 45.120.216.232 port 47634:11: Bye Bye [preauth]
Nov 29 05:23:48 compute-0 sshd-session[210644]: Disconnected from invalid user nominatim 45.120.216.232 port 47634 [preauth]
Nov 29 05:23:48 compute-0 sudo[210990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anrfjjxezugkbenapvmeyvriacqedifj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393827.67271-775-260302718056186/AnsiballZ_copy.py'
Nov 29 05:23:48 compute-0 sudo[210990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:48 compute-0 ceph-mon[75176]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:48 compute-0 python3.9[210992]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393827.67271-775-260302718056186/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:48 compute-0 sudo[210990]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:49 compute-0 sudo[211142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kktndxokgokafrueccligvdhhhrkttvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393829.097444-775-99970257724439/AnsiballZ_stat.py'
Nov 29 05:23:49 compute-0 sudo[211142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:49 compute-0 python3.9[211144]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:49 compute-0 sudo[211142]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:23:50 compute-0 sudo[211265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdakxofxkkzugqcqzxnqzuiuckngdyqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393829.097444-775-99970257724439/AnsiballZ_copy.py'
Nov 29 05:23:50 compute-0 sudo[211265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:50 compute-0 python3.9[211267]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393829.097444-775-99970257724439/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:50 compute-0 sudo[211265]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:50 compute-0 ceph-mon[75176]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:50 compute-0 sudo[211417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfiawpcuvbjkuxnbricpckqpqaqhvfeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393830.6536858-775-208646610600719/AnsiballZ_stat.py'
Nov 29 05:23:50 compute-0 sudo[211417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:51 compute-0 python3.9[211419]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
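[editor's note] The pg_autoscaler arithmetic above follows pg_target = space_ratio * bias * pg_budget, with a budget of 300 PGs (consistent with the default mon_target_pg_per_osd=100 on a 3-OSD cluster; the OSD count is an inference). Checking the two non-zero pools: 7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337 ('.mgr') and 5.087256625643029e-07 * 4.0 * 300 = 0.0006104707950771635 ('cephfs.cephfs.meta', whose metadata bias is 4.0). Every target is far below the pool's current pg_num, so each pool is quantized back to its current value and nothing is resized.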
Nov 29 05:23:51 compute-0 sudo[211417]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:51 compute-0 sudo[211540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhhtmvugkjiswiepqzgohqkedfjijsmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393830.6536858-775-208646610600719/AnsiballZ_copy.py'
Nov 29 05:23:51 compute-0 sudo[211540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:51 compute-0 python3.9[211542]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393830.6536858-775-208646610600719/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:51 compute-0 sudo[211540]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:52 compute-0 sudo[211693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptbczoglobssdffgcgocemdcdochuoay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393832.2176907-775-58879929467664/AnsiballZ_stat.py'
Nov 29 05:23:52 compute-0 sudo[211693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:52 compute-0 python3.9[211695]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:52 compute-0 sudo[211693]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:52 compute-0 ceph-mon[75176]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:53 compute-0 sudo[211816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrmccdyvaqyalkjjzclqsyseyjktglpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393832.2176907-775-58879929467664/AnsiballZ_copy.py'
Nov 29 05:23:53 compute-0 sudo[211816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:53 compute-0 python3.9[211818]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393832.2176907-775-58879929467664/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:53 compute-0 sudo[211816]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:53 compute-0 sudo[211968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpkghhbpwieheeumtfflbyjywbaxjbzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393833.6335523-775-53640796163593/AnsiballZ_stat.py'
Nov 29 05:23:53 compute-0 sudo[211968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:54 compute-0 python3.9[211970]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:54 compute-0 sudo[211968]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:54 compute-0 sudo[212091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpaymjqeqebzeiwejfsgkyjjopbqunek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393833.6335523-775-53640796163593/AnsiballZ_copy.py'
Nov 29 05:23:54 compute-0 sudo[212091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:23:54 compute-0 ceph-mon[75176]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:54 compute-0 python3.9[212093]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393833.6335523-775-53640796163593/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:54 compute-0 sudo[212091]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:55 compute-0 sudo[212243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnjacqylyqwrhhthamzvlhlkgkiuikkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393835.0826201-775-221247073141783/AnsiballZ_stat.py'
Nov 29 05:23:55 compute-0 sudo[212243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:55 compute-0 python3.9[212245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:55 compute-0 sudo[212243]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:56 compute-0 sudo[212366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcimmyocwdzmiywtjoniduddlgwcbcvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393835.0826201-775-221247073141783/AnsiballZ_copy.py'
Nov 29 05:23:56 compute-0 sudo[212366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:56 compute-0 python3.9[212368]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393835.0826201-775-221247073141783/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:56 compute-0 sudo[212366]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:56 compute-0 sudo[212518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpsottnrewxypmfgiyhrgoptaonbajue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393836.5555768-775-50405147168642/AnsiballZ_stat.py'
Nov 29 05:23:56 compute-0 sudo[212518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:56 compute-0 ceph-mon[75176]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:57 compute-0 python3.9[212520]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:57 compute-0 sudo[212518]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:57 compute-0 sudo[212641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iozoommwonzxnodetfqocrcyvpamvnqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393836.5555768-775-50405147168642/AnsiballZ_copy.py'
Nov 29 05:23:57 compute-0 sudo[212641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:57 compute-0 python3.9[212643]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393836.5555768-775-50405147168642/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:57 compute-0 sudo[212641]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:58 compute-0 sudo[212793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgwmosgzwayhdruigdzaqfhphwfjdcgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393837.9073584-775-85194055376253/AnsiballZ_stat.py'
Nov 29 05:23:58 compute-0 sudo[212793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:58 compute-0 python3.9[212795]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:58 compute-0 sudo[212793]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:58 compute-0 sudo[212916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzmmpgrzaibsjjetknaqgasjtnswgcwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393837.9073584-775-85194055376253/AnsiballZ_copy.py'
Nov 29 05:23:58 compute-0 sudo[212916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:58 compute-0 ceph-mon[75176]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:59 compute-0 python3.9[212918]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393837.9073584-775-85194055376253/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:23:59 compute-0 sudo[212916]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:59 compute-0 sudo[213068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crlzebesetdernfrzhnzglqhozawjljd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393839.2397733-775-150481900540856/AnsiballZ_stat.py'
Nov 29 05:23:59 compute-0 sudo[213068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:23:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:23:59 compute-0 python3.9[213070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:23:59 compute-0 sudo[213068]: pam_unix(sudo:session): session closed for user root
Nov 29 05:23:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:00 compute-0 sudo[213191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnckoxnbpfmrxcxyabasxyqeknlxrcdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393839.2397733-775-150481900540856/AnsiballZ_copy.py'
Nov 29 05:24:00 compute-0 sudo[213191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:00 compute-0 python3.9[213193]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393839.2397733-775-150481900540856/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:00 compute-0 sudo[213191]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:00 compute-0 sudo[213343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wujlzmdfwgtkjfttvxmjpjiithbawgpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393840.4134438-775-65207750387281/AnsiballZ_stat.py'
Nov 29 05:24:00 compute-0 sudo[213343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:00 compute-0 ceph-mon[75176]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:00 compute-0 python3.9[213345]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:00 compute-0 sudo[213343]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:01 compute-0 sudo[213466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtyfzobhrolzvawltlobxsyktvfcwrxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393840.4134438-775-65207750387281/AnsiballZ_copy.py'
Nov 29 05:24:01 compute-0 sudo[213466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:01 compute-0 python3.9[213468]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393840.4134438-775-65207750387281/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:01 compute-0 sudo[213466]: pam_unix(sudo:session): session closed for user root
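The six copy tasks above install the same rendered libvirt-socket.unit.j2 template (identical checksum 0bad41f409b4ee7e780a2a59dc18f5c84ed99826) as an override.conf drop-in for each virtqemud and virtsecretd socket unit. The rendered content itself is masked in the log (content=NOT_LOGGING_PARAMETER), so the following is only a minimal sketch of such a socket drop-in; the [Socket] values here are assumptions, not the deployed settings:

    # hypothetical drop-in -- the actual template output is not logged
    mkdir -p /etc/systemd/system/virtqemud.socket.d
    cat > /etc/systemd/system/virtqemud.socket.d/override.conf <<'EOF'
    [Socket]
    SocketMode=0660
    SocketGroup=libvirt
    EOF
    systemctl daemon-reload   # drop-ins take effect only after a reload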
Nov 29 05:24:02 compute-0 python3.9[213618]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
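This check scans /run/libvirt for entries still carrying a container_*_t SELinux type (leftovers from a previously containerized libvirt), with pipefail so a failed stage propagates to the task result. If stale container labels did need to be reset to the loaded policy's defaults, restorecon would be the usual remedy:

    # relabel /run/libvirt back to the contexts defined by the active policy
    restorecon -Rv /run/libvirt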
Nov 29 05:24:02 compute-0 ceph-mon[75176]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:03 compute-0 sudo[213771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqnpwsbqdhrvqgdhdbaqelbuvryhioft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393842.5797474-981-96066832704211/AnsiballZ_seboolean.py'
Nov 29 05:24:03 compute-0 sudo[213771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:03 compute-0 python3.9[213773]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 29 05:24:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:04 compute-0 sudo[213771]: pam_unix(sudo:session): session closed for user root
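The ansible.posix.seboolean task above (name=os_enable_vtpm, state=True, persistent=True) maps directly onto setsebool, with persistent=True corresponding to -P:

    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm      # expected: os_enable_vtpm --> on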
Nov 29 05:24:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:04 compute-0 ceph-mon[75176]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:05 compute-0 sudo[213927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfhbfigwcrecmatjiowihbqlnwpvrcrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393844.8058634-989-51721682970238/AnsiballZ_copy.py'
Nov 29 05:24:05 compute-0 dbus-broker-launch[770]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 05:24:05 compute-0 sudo[213927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:05 compute-0 python3.9[213929]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:05 compute-0 sudo[213927]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:05 compute-0 sudo[214079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yujngasdqbukmticjfhlkftnpuecbcqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393845.5211868-989-39044725633780/AnsiballZ_copy.py'
Nov 29 05:24:05 compute-0 sudo[214079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:06 compute-0 python3.9[214081]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:06 compute-0 sudo[214079]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:06 compute-0 sudo[214231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdebmarbrrpcrsykoctcauqnndpnsyse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393846.3596091-989-153126195622402/AnsiballZ_copy.py'
Nov 29 05:24:06 compute-0 sudo[214231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:06 compute-0 podman[214233]: 2025-11-29 05:24:06.879453705 +0000 UTC m=+0.108149667 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
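The health_status=healthy events podman logs here come from the configured healthcheck ('test': '/openstack/healthcheck'). The same probe can be run on demand; podman healthcheck run exits 0 when the container is healthy:

    podman healthcheck run ovn_controller && echo ovn_controller healthy
    podman ps --filter name=ovn_controller --format '{{.Names}} {{.Status}}'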
Nov 29 05:24:06 compute-0 python3.9[214234]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:06 compute-0 ceph-mon[75176]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:06 compute-0 sudo[214231]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:07 compute-0 sudo[214412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wntmojaiaotqtecxurdqghljrprainkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393847.1496546-989-19403441868619/AnsiballZ_copy.py'
Nov 29 05:24:07 compute-0 sudo[214412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:07 compute-0 python3.9[214414]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:07 compute-0 sudo[214412]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:07 compute-0 sshd-session[214267]: Invalid user nrk from 152.32.145.111 port 58210
Nov 29 05:24:08 compute-0 sshd-session[214267]: Received disconnect from 152.32.145.111 port 58210:11: Bye Bye [preauth]
Nov 29 05:24:08 compute-0 sshd-session[214267]: Disconnected from invalid user nrk 152.32.145.111 port 58210 [preauth]
Nov 29 05:24:08 compute-0 sudo[214564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxrqwntpaybcvtvemalbqglbdavzhvyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393847.9596105-989-80851715449426/AnsiballZ_copy.py'
Nov 29 05:24:08 compute-0 sudo[214564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:08 compute-0 python3.9[214566]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:08 compute-0 sudo[214564]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:08 compute-0 ceph-mon[75176]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:09 compute-0 sudo[214716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzbulpaiqermplyjhktzmifvlgkgdrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393848.810234-1025-33047954593815/AnsiballZ_copy.py'
Nov 29 05:24:09 compute-0 sudo[214716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:09 compute-0 python3.9[214718]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:09 compute-0 sudo[214716]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:09 compute-0 sudo[214868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anmcxuquyzrfxbkemjkoamxlnlopvpen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393849.5478425-1025-14838024099909/AnsiballZ_copy.py'
Nov 29 05:24:09 compute-0 sudo[214868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:10 compute-0 python3.9[214870]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:10 compute-0 sudo[214868]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:10 compute-0 sudo[215020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjkqtlntcjzdgwngkevizujdqqwjanyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393850.2033424-1025-228499350700940/AnsiballZ_copy.py'
Nov 29 05:24:10 compute-0 sudo[215020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:10 compute-0 python3.9[215022]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:10 compute-0 sudo[215020]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:10 compute-0 ceph-mon[75176]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:24:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:24:11 compute-0 sudo[215172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsgetxvxokhkrcbxdzrvurssowdntubu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393850.9704692-1025-82664353046589/AnsiballZ_copy.py'
Nov 29 05:24:11 compute-0 sudo[215172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:24:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:24:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:24:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:24:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:11 compute-0 python3.9[215174]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:11 compute-0 sudo[215172]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:12 compute-0 sudo[215325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cunvolzgmxzpldeqbvnodbojaousuzst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393851.8368711-1025-147877475111013/AnsiballZ_copy.py'
Nov 29 05:24:12 compute-0 sudo[215325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:12 compute-0 python3.9[215327]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:12 compute-0 sudo[215325]: pam_unix(sudo:session): session closed for user root
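The copy tasks above fan the same tls.crt/tls.key/ca.crt out to the libvirt and QEMU PKI locations (note the differing modes: serverkey.pem is 0600, while clientkey.pem lands 0644 and the qemu keys 0640 group qemu). One way to sanity-check each installed pair is to compare the public key embedded in the certificate with the one derived from the private key, and to verify the chain against the deployed CA:

    # both digests must match for a cert/key pair
    openssl x509 -pubkey -noout -in /etc/pki/libvirt/servercert.pem | sha256sum
    openssl pkey -pubout -in /etc/pki/libvirt/private/serverkey.pem | sha256sum
    # chain verification against the CA installed above
    openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/servercert.pem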
Nov 29 05:24:13 compute-0 ceph-mon[75176]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:13 compute-0 sudo[215477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umuaapdzudzjkhreztdblsgywnidttvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393852.6698923-1061-279694317388893/AnsiballZ_systemd.py'
Nov 29 05:24:13 compute-0 sudo[215477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:13 compute-0 python3.9[215479]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:24:13 compute-0 systemd[1]: Reloading.
Nov 29 05:24:13 compute-0 systemd-rc-local-generator[215502]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:24:13 compute-0 systemd-sysv-generator[215507]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:24:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:24:13.734 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:24:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:24:13.735 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:24:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:24:13.735 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:24:13 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 05:24:13 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 05:24:13 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 05:24:13 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 05:24:13 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 29 05:24:13 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 29 05:24:13 compute-0 sudo[215477]: pam_unix(sudo:session): session closed for user root
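Each of these ansible.builtin.systemd tasks performs a daemon-reload (picking up the socket drop-ins installed earlier) and then restarts the service, after which systemd brings up the related sockets, as the "Listening on ..." lines show. The manual equivalent for virtlogd:

    systemctl daemon-reload
    systemctl restart virtlogd.service
    systemctl is-active virtlogd.service virtlogd.socket virtlogd-admin.socket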
Nov 29 05:24:14 compute-0 ceph-mon[75176]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:14 compute-0 sudo[215670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdwrrlcdnzjpjidjcrtoxbkmdawggvos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393854.1876287-1061-135212162376223/AnsiballZ_systemd.py'
Nov 29 05:24:14 compute-0 sudo[215670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:14 compute-0 python3.9[215672]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:24:14 compute-0 systemd[1]: Reloading.
Nov 29 05:24:15 compute-0 systemd-rc-local-generator[215691]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:24:15 compute-0 systemd-sysv-generator[215698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:24:15 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 05:24:15 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 05:24:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 05:24:15 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 05:24:15 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 05:24:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 05:24:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 05:24:15 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 05:24:15 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 29 05:24:15 compute-0 sudo[215670]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:15 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 05:24:15 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 05:24:15 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 05:24:15 compute-0 sudo[215844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:24:15 compute-0 sudo[215844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:15 compute-0 sudo[215844]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:16 compute-0 sudo[215892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:24:16 compute-0 sudo[215892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:16 compute-0 sudo[215892]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:16 compute-0 sudo[215944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzdkvvmawlxlzyhbsepwmxehormksthf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393855.687601-1061-150910452385737/AnsiballZ_systemd.py'
Nov 29 05:24:16 compute-0 sudo[215944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:16 compute-0 sudo[215946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:24:16 compute-0 sudo[215946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:16 compute-0 sudo[215946]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:16 compute-0 sudo[215974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:24:16 compute-0 sudo[215974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:16 compute-0 python3.9[215949]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:24:16 compute-0 systemd[1]: Reloading.
Nov 29 05:24:16 compute-0 systemd-sysv-generator[216038]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:24:16 compute-0 systemd-rc-local-generator[216034]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:24:16 compute-0 ceph-mon[75176]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:16 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 05:24:16 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 05:24:16 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 05:24:16 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 05:24:16 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 29 05:24:16 compute-0 setroubleshoot[215709]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 68bc54e6-e2dc-49b4-b12f-1375125e19a3
Nov 29 05:24:16 compute-0 sudo[215974]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:16 compute-0 setroubleshoot[215709]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************

                                                  If you want to help identify whether the domain needs this access, or you have a file with the wrong permissions on your system, turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do:

                                                  Turn on full auditing:
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate the AVC, then execute:
                                                  # ausearch -m avc -ts recent
                                                  If you see a PATH record, check ownership/permissions on the file and fix it;
                                                  otherwise report this as a bug.

                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************

                                                  If you believe that virtlogd should have the dac_read_search capability by default,
                                                  then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do:
                                                  Allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp

Nov 29 05:24:16 compute-0 systemd[1]: Started libvirt proxy daemon.
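For reference, the module produced by the audit2allow pipeline suggested in the setroubleshoot message is normally a single capability rule; the sketch below assumes the denial is against the virtlogd_t domain named in the message:

    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # my-virtlogd.te then typically contains:
    #   module my-virtlogd 1.0;
    #   require { type virtlogd_t; class capability dac_read_search; }
    #   allow virtlogd_t self:capability dac_read_search;
    semodule -X 300 -i my-virtlogd.pp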
Nov 29 05:24:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:24:16 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:24:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:24:16 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:24:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:24:16 compute-0 sudo[215944]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:16 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:24:16 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 5fc82019-45c9-4cc5-b242-7f76948dbbbf does not exist
Nov 29 05:24:16 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 6c387c9b-f21e-434b-865e-16297cfc1046 does not exist
Nov 29 05:24:16 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ea77a73d-224d-48c5-a8b4-baa6bbb0ceba does not exist
Nov 29 05:24:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:24:16 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:24:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:24:16 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:24:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:24:16 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
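The mon_command dispatches from mgr.compute-0.csskcz above correspond to ordinary ceph CLI calls, which can be replayed by hand when debugging cephadm's OSD preparation:

    ceph config generate-minimal-conf
    ceph auth get client.bootstrap-osd
    ceph osd tree destroyed --format json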
Nov 29 05:24:16 compute-0 sudo[216092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:24:16 compute-0 sudo[216092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:16 compute-0 sudo[216092]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:16 compute-0 sudo[216141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:24:16 compute-0 sudo[216141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:16 compute-0 sudo[216141]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:17 compute-0 sudo[216189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:24:17 compute-0 sudo[216189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:17 compute-0 sudo[216189]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:17 compute-0 sudo[216242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:24:17 compute-0 sudo[216242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
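The cephadm call above wraps ceph-volume lvm batch over three pre-created logical volumes, with --yes --no-systemd so the OSDs are created non-interactively and unit management is left to cephadm. The same batch can be previewed without touching the devices by swapping --yes for --report:

    ceph-volume lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --report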
Nov 29 05:24:17 compute-0 sudo[216377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozukslptllnzrfwgkqliampevwumzsob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393857.0024185-1061-106618940667796/AnsiballZ_systemd.py'
Nov 29 05:24:17 compute-0 sudo[216377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:17 compute-0 podman[216328]: 2025-11-29 05:24:17.462113458 +0000 UTC m=+0.079888706 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:24:17 compute-0 podman[216402]: 2025-11-29 05:24:17.532422827 +0000 UTC m=+0.041650680 container create 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 05:24:17 compute-0 systemd[1]: Started libpod-conmon-5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa.scope.
Nov 29 05:24:17 compute-0 podman[216402]: 2025-11-29 05:24:17.513642251 +0000 UTC m=+0.022870104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:24:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:24:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:17 compute-0 podman[216402]: 2025-11-29 05:24:17.626439828 +0000 UTC m=+0.135667701 container init 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:24:17 compute-0 podman[216402]: 2025-11-29 05:24:17.635885982 +0000 UTC m=+0.145113815 container start 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:24:17 compute-0 podman[216402]: 2025-11-29 05:24:17.639333133 +0000 UTC m=+0.148560986 container attach 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 05:24:17 compute-0 dreamy_fermat[216418]: 167 167
Nov 29 05:24:17 compute-0 systemd[1]: libpod-5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa.scope: Deactivated successfully.
Nov 29 05:24:17 compute-0 conmon[216418]: conmon 5a8d825701377ad69289 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa.scope/container/memory.events
Nov 29 05:24:17 compute-0 podman[216402]: 2025-11-29 05:24:17.642003417 +0000 UTC m=+0.151231250 container died 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:24:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c832cf54ee8f31f27ee57202882839be2752d475145bdbb275889a3dfb279c9-merged.mount: Deactivated successfully.
Nov 29 05:24:17 compute-0 podman[216402]: 2025-11-29 05:24:17.678821311 +0000 UTC m=+0.188049134 container remove 5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_fermat, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 05:24:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:24:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:24:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:24:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:24:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:24:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:24:17 compute-0 systemd[1]: libpod-conmon-5a8d825701377ad69289fd21987c4c99714e40811b1c8db8f41f61068e3d02fa.scope: Deactivated successfully.
Nov 29 05:24:17 compute-0 python3.9[216396]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:24:17 compute-0 systemd[1]: Reloading.
Nov 29 05:24:17 compute-0 podman[216444]: 2025-11-29 05:24:17.84902735 +0000 UTC m=+0.046306780 container create f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:24:17 compute-0 systemd-sysv-generator[216485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:24:17 compute-0 systemd-rc-local-generator[216482]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:24:17 compute-0 podman[216444]: 2025-11-29 05:24:17.825719336 +0000 UTC m=+0.022998776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:24:18 compute-0 systemd[1]: Started libpod-conmon-f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362.scope.
Nov 29 05:24:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:24:18 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 05:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:18 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 05:24:18 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 05:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:18 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 05:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:18 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 05:24:18 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 05:24:18 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 05:24:18 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 05:24:18 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 05:24:18 compute-0 podman[216444]: 2025-11-29 05:24:18.202640102 +0000 UTC m=+0.399919552 container init f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:24:18 compute-0 podman[216444]: 2025-11-29 05:24:18.216763147 +0000 UTC m=+0.414042577 container start f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:24:18 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 05:24:18 compute-0 podman[216444]: 2025-11-29 05:24:18.223582129 +0000 UTC m=+0.420861609 container attach f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 05:24:18 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 05:24:18 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 29 05:24:18 compute-0 sudo[216377]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:18 compute-0 ceph-mon[75176]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:18 compute-0 sudo[216676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cubxuroasjzspilnwdnxyzkmaqwanwxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393858.4750226-1061-233546472126567/AnsiballZ_systemd.py'
Nov 29 05:24:18 compute-0 sudo[216676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:19 compute-0 python3.9[216680]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:24:19 compute-0 systemd[1]: Reloading.
Nov 29 05:24:19 compute-0 xenodochial_diffie[216495]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:24:19 compute-0 xenodochial_diffie[216495]: --> relative data size: 1.0
Nov 29 05:24:19 compute-0 xenodochial_diffie[216495]: --> All data devices are unavailable
Nov 29 05:24:19 compute-0 podman[216444]: 2025-11-29 05:24:19.280956931 +0000 UTC m=+1.478236341 container died f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:24:19 compute-0 systemd-rc-local-generator[216737]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:24:19 compute-0 systemd-sysv-generator[216740]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:24:19 compute-0 systemd[1]: libpod-f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362.scope: Deactivated successfully.
Nov 29 05:24:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-91152eb117e6f7cb7b47d21926675054853e1897b3c980a08db2cbd819e55e8a-merged.mount: Deactivated successfully.
Nov 29 05:24:19 compute-0 podman[216444]: 2025-11-29 05:24:19.577296692 +0000 UTC m=+1.774576092 container remove f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_diffie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:24:19 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 05:24:19 compute-0 systemd[1]: libpod-conmon-f71e9bca07d51fd31858cd579542c9dc61d3f35f5c29daf48f6d3c1826c09362.scope: Deactivated successfully.
Nov 29 05:24:19 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 05:24:19 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 05:24:19 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 05:24:19 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 05:24:19 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 05:24:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:19 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 29 05:24:19 compute-0 sudo[216242]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:19 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 29 05:24:19 compute-0 sudo[216676]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:19 compute-0 sudo[216755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:24:19 compute-0 sudo[216755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:19 compute-0 sudo[216755]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:19 compute-0 sudo[216802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:24:19 compute-0 sudo[216802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:19 compute-0 sudo[216802]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:19 compute-0 sudo[216847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:24:19 compute-0 sudo[216847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:19 compute-0 sudo[216847]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:20 compute-0 sudo[216872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:24:20 compute-0 sudo[216872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:20 compute-0 sudo[217074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzxhjysqhxxmkervgogjdeldbqnmvutv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393860.054246-1098-211195491010308/AnsiballZ_file.py'
Nov 29 05:24:20 compute-0 podman[217052]: 2025-11-29 05:24:20.402728991 +0000 UTC m=+0.038801632 container create 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:24:20 compute-0 sudo[217074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:20 compute-0 systemd[1]: Started libpod-conmon-90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f.scope.
Nov 29 05:24:20 compute-0 podman[217052]: 2025-11-29 05:24:20.385347609 +0000 UTC m=+0.021420240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:24:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:24:20 compute-0 podman[217052]: 2025-11-29 05:24:20.507393365 +0000 UTC m=+0.143466016 container init 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:24:20 compute-0 podman[217052]: 2025-11-29 05:24:20.515463386 +0000 UTC m=+0.151536017 container start 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:24:20 compute-0 podman[217052]: 2025-11-29 05:24:20.518233471 +0000 UTC m=+0.154306132 container attach 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:24:20 compute-0 systemd[1]: libpod-90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f.scope: Deactivated successfully.
Nov 29 05:24:20 compute-0 interesting_cannon[217082]: 167 167
Nov 29 05:24:20 compute-0 conmon[217082]: conmon 90a4438b9678e4a0f3b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f.scope/container/memory.events
Nov 29 05:24:20 compute-0 podman[217052]: 2025-11-29 05:24:20.527974862 +0000 UTC m=+0.164047513 container died 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:24:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c1adb0195826ab8cbd635a3484508200fc8dde6f341d68b330d89936c7c086a-merged.mount: Deactivated successfully.
Nov 29 05:24:20 compute-0 podman[217052]: 2025-11-29 05:24:20.570622445 +0000 UTC m=+0.206695086 container remove 90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:24:20 compute-0 systemd[1]: libpod-conmon-90a4438b9678e4a0f3b283fb925e8b4112cfa3cab2cece21ff49de0c03aada4f.scope: Deactivated successfully.
Nov 29 05:24:20 compute-0 python3.9[217079]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:20 compute-0 sudo[217074]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:20 compute-0 podman[217129]: 2025-11-29 05:24:20.771366438 +0000 UTC m=+0.053606512 container create a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:24:20 compute-0 systemd[1]: Started libpod-conmon-a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533.scope.
Nov 29 05:24:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:24:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866b483cf367cb29dda9de98c66e7c8970ceedc77ce5b7856e0ad403b8075699/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866b483cf367cb29dda9de98c66e7c8970ceedc77ce5b7856e0ad403b8075699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866b483cf367cb29dda9de98c66e7c8970ceedc77ce5b7856e0ad403b8075699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/866b483cf367cb29dda9de98c66e7c8970ceedc77ce5b7856e0ad403b8075699/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:20 compute-0 podman[217129]: 2025-11-29 05:24:20.84392345 +0000 UTC m=+0.126163504 container init a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 05:24:20 compute-0 podman[217129]: 2025-11-29 05:24:20.751322873 +0000 UTC m=+0.033562937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:24:20 compute-0 podman[217129]: 2025-11-29 05:24:20.851206973 +0000 UTC m=+0.133447027 container start a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 05:24:20 compute-0 podman[217129]: 2025-11-29 05:24:20.854665956 +0000 UTC m=+0.136906020 container attach a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:24:20 compute-0 ceph-mon[75176]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:21 compute-0 sudo[217275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raejmwxtflolliutwldjiptrlemrcmak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393860.808871-1106-47440200875005/AnsiballZ_find.py'
Nov 29 05:24:21 compute-0 sudo[217275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:21 compute-0 python3.9[217277]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 05:24:21 compute-0 sudo[217275]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]: {
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:     "0": [
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:         {
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "devices": [
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "/dev/loop3"
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             ],
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_name": "ceph_lv0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_size": "21470642176",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "name": "ceph_lv0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "tags": {
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.cluster_name": "ceph",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.crush_device_class": "",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.encrypted": "0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.osd_id": "0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.type": "block",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.vdo": "0"
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             },
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "type": "block",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "vg_name": "ceph_vg0"
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:         }
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:     ],
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:     "1": [
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:         {
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "devices": [
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "/dev/loop4"
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             ],
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_name": "ceph_lv1",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_size": "21470642176",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "name": "ceph_lv1",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "tags": {
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.cluster_name": "ceph",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.crush_device_class": "",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.encrypted": "0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.osd_id": "1",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.type": "block",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.vdo": "0"
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             },
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "type": "block",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "vg_name": "ceph_vg1"
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:         }
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:     ],
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:     "2": [
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:         {
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "devices": [
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "/dev/loop5"
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             ],
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_name": "ceph_lv2",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_size": "21470642176",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "name": "ceph_lv2",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "tags": {
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.cluster_name": "ceph",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.crush_device_class": "",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.encrypted": "0",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.osd_id": "2",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.type": "block",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:                 "ceph.vdo": "0"
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             },
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "type": "block",
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:             "vg_name": "ceph_vg2"
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:         }
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]:     ]
Nov 29 05:24:21 compute-0 gifted_lichterman[217168]: }
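The JSON block above is the output of the `ceph-volume ... lvm list --format json` call issued at 05:24:20: it maps each OSD id to its LVM-backed block device and tags. A minimal Python sketch of how such output could be summarized, assuming the blob has been saved to a hypothetical lvm_list.json (the script and filename are illustrative, not part of the job):

    #!/usr/bin/env python3
    """Summarize `ceph-volume lvm list --format json` output (illustrative)."""
    import json

    # Assumption: the JSON logged above was saved to this file beforehand.
    with open("lvm_list.json") as fh:
        osds = json.load(fh)

    for osd_id, volumes in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for vol in volumes:
            tags = vol.get("tags", {})
            print(f"osd.{osd_id}: lv={vol['lv_path']}"
                  f" backing={','.join(vol['devices'])}"
                  f" osd_fsid={tags.get('ceph.osd_fsid', '?')}"
                  f" encrypted={tags.get('ceph.encrypted', '?')}")

For the listing above this prints three lines, one per OSD, each backed by a single loop device (/dev/loop3 through /dev/loop5).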
Nov 29 05:24:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:21 compute-0 systemd[1]: libpod-a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533.scope: Deactivated successfully.
Nov 29 05:24:21 compute-0 podman[217129]: 2025-11-29 05:24:21.628617202 +0000 UTC m=+0.910857336 container died a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:24:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-866b483cf367cb29dda9de98c66e7c8970ceedc77ce5b7856e0ad403b8075699-merged.mount: Deactivated successfully.
Nov 29 05:24:21 compute-0 podman[217129]: 2025-11-29 05:24:21.698126891 +0000 UTC m=+0.980366935 container remove a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:24:21 compute-0 systemd[1]: libpod-conmon-a87d7d77ae5c864942b9bdd82310fcc0d5312e08e981e8491de0911956a99533.scope: Deactivated successfully.
Nov 29 05:24:21 compute-0 sudo[216872]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:21 compute-0 sudo[217410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:24:21 compute-0 sudo[217410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:21 compute-0 sudo[217410]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:21 compute-0 sudo[217475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulzpfchufhtpsmugvitwirwqpzcjlcof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393861.5232031-1114-74965923204886/AnsiballZ_command.py'
Nov 29 05:24:21 compute-0 sudo[217475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:21 compute-0 sudo[217461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:24:21 compute-0 sudo[217461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:21 compute-0 sudo[217461]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:21 compute-0 sudo[217495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:24:21 compute-0 sudo[217495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:21 compute-0 sudo[217495]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:22 compute-0 python3.9[217489]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
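The shell task recorded above echoes the hard-coded cluster name and pulls the fsid out of ceph.conf with awk, trimming whitespace via xargs. A rough Python equivalent, assuming the same config path and that the fsid sits under [global] as is conventional for ceph.conf:

    #!/usr/bin/env python3
    """Python equivalent of the awk-based fsid extraction (illustrative)."""
    import configparser

    CONF = "/var/lib/openstack/config/ceph/ceph.conf"  # same path the task reads

    cfg = configparser.ConfigParser()
    cfg.read(CONF)

    print("ceph")                          # cluster name, as echoed by the task
    # .strip() mirrors what piping the awk output through xargs achieves.
    print(cfg["global"]["fsid"].strip())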
Nov 29 05:24:22 compute-0 sudo[217475]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:22 compute-0 sudo[217520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:24:22 compute-0 sudo[217520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:22 compute-0 sshd-session[215175]: error: kex_exchange_identification: read: Connection timed out
Nov 29 05:24:22 compute-0 sshd-session[215175]: banner exchange: Connection from 120.48.175.69 port 59438: Connection timed out
Nov 29 05:24:22 compute-0 podman[217637]: 2025-11-29 05:24:22.499782225 +0000 UTC m=+0.066735075 container create 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:24:22 compute-0 systemd[1]: Started libpod-conmon-687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148.scope.
Nov 29 05:24:22 compute-0 podman[217637]: 2025-11-29 05:24:22.468629485 +0000 UTC m=+0.035582375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:24:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:24:22 compute-0 podman[217637]: 2025-11-29 05:24:22.606965638 +0000 UTC m=+0.173918528 container init 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:24:22 compute-0 podman[217637]: 2025-11-29 05:24:22.618922182 +0000 UTC m=+0.185875012 container start 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:24:22 compute-0 podman[217637]: 2025-11-29 05:24:22.622294172 +0000 UTC m=+0.189247072 container attach 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:24:22 compute-0 kind_kilby[217689]: 167 167
Nov 29 05:24:22 compute-0 systemd[1]: libpod-687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148.scope: Deactivated successfully.
Nov 29 05:24:22 compute-0 podman[217637]: 2025-11-29 05:24:22.626925333 +0000 UTC m=+0.193878173 container died 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:24:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3133ca2d5e1950a0d5399609e4a551806d9dbda54bdeda3195f2b7615e6db010-merged.mount: Deactivated successfully.
Nov 29 05:24:22 compute-0 podman[217637]: 2025-11-29 05:24:22.681352613 +0000 UTC m=+0.248305453 container remove 687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_kilby, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:24:22 compute-0 systemd[1]: libpod-conmon-687dc7952a56264db210c6a48d463fbb122a2f1dbeb2664d524e26cdbec25148.scope: Deactivated successfully.
Nov 29 05:24:22 compute-0 ceph-mon[75176]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:22 compute-0 podman[217780]: 2025-11-29 05:24:22.902423229 +0000 UTC m=+0.055668911 container create a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 05:24:22 compute-0 python3.9[217774]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 05:24:22 compute-0 systemd[1]: Started libpod-conmon-a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68.scope.
Nov 29 05:24:22 compute-0 podman[217780]: 2025-11-29 05:24:22.876489085 +0000 UTC m=+0.029734847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:24:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde80a9c37ae0495a96fb6f7d81a03ba9adf0b23754e37609d081c1035531500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde80a9c37ae0495a96fb6f7d81a03ba9adf0b23754e37609d081c1035531500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde80a9c37ae0495a96fb6f7d81a03ba9adf0b23754e37609d081c1035531500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde80a9c37ae0495a96fb6f7d81a03ba9adf0b23754e37609d081c1035531500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:24:23 compute-0 podman[217780]: 2025-11-29 05:24:23.025730806 +0000 UTC m=+0.178976598 container init a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 05:24:23 compute-0 podman[217780]: 2025-11-29 05:24:23.03852982 +0000 UTC m=+0.191775502 container start a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 05:24:23 compute-0 podman[217780]: 2025-11-29 05:24:23.042079284 +0000 UTC m=+0.195325006 container attach a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:24:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:23 compute-0 python3.9[217958]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]: {
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "osd_id": 0,
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "type": "bluestore"
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:     },
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "osd_id": 1,
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "type": "bluestore"
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:     },
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "osd_id": 2,
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:         "type": "bluestore"
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]:     }
Nov 29 05:24:24 compute-0 hopeful_herschel[217796]: }
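This `raw list` output reports the same three bluestore OSDs, keyed by osd_uuid, that the earlier `lvm list` reported keyed by osd id. A sketch of a consistency check between the two listings, assuming both JSON blobs were captured to the hypothetical files lvm_list.json and raw_list.json used above:

    #!/usr/bin/env python3
    """Cross-check `lvm list` vs `raw list` output (illustrative)."""
    import json

    with open("lvm_list.json") as fh:   # earlier `lvm list --format json` blob
        lvm = json.load(fh)
    with open("raw_list.json") as fh:   # the `raw list --format json` blob above
        raw = json.load(fh)

    # osd_id -> osd_fsid as seen by each listing.
    from_lvm = {int(i): v["tags"]["ceph.osd_fsid"]
                for i, vols in lvm.items()
                for v in vols if v.get("type") == "block"}
    from_raw = {entry["osd_id"]: uuid for uuid, entry in raw.items()}

    assert from_lvm == from_raw, (from_lvm, from_raw)
    print(f"{len(from_raw)} OSDs consistent across both listings")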
Nov 29 05:24:24 compute-0 systemd[1]: libpod-a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68.scope: Deactivated successfully.
Nov 29 05:24:24 compute-0 systemd[1]: libpod-a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68.scope: Consumed 1.096s CPU time.
Nov 29 05:24:24 compute-0 podman[217780]: 2025-11-29 05:24:24.13499253 +0000 UTC m=+1.288238212 container died a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:24:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-bde80a9c37ae0495a96fb6f7d81a03ba9adf0b23754e37609d081c1035531500-merged.mount: Deactivated successfully.
Nov 29 05:24:24 compute-0 podman[217780]: 2025-11-29 05:24:24.196721715 +0000 UTC m=+1.349967427 container remove a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_herschel, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:24:24 compute-0 systemd[1]: libpod-conmon-a28426abcdfc1592bfb2c915659a9fd0ac528152e42548da9d336a67c286da68.scope: Deactivated successfully.
Nov 29 05:24:24 compute-0 sudo[217520]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:24:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:24:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:24:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:24:24 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 9db73650-75a4-4165-bc3d-b2338aa3177b does not exist
Nov 29 05:24:24 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 5e59ea50-b21e-4abf-8ec6-ba181db911c4 does not exist
Nov 29 05:24:24 compute-0 sudo[218063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:24:24 compute-0 sudo[218063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:24 compute-0 sudo[218063]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:24 compute-0 sudo[218112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:24:24 compute-0 sudo[218112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:24:24 compute-0 sudo[218112]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:24 compute-0 python3.9[218161]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393863.3929856-1133-194156182925601/.source.xml follow=False _original_basename=secret.xml.j2 checksum=6a747b6a02a8b21427ead7222f3616a6bd64ba4d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:24 compute-0 ceph-mon[75176]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:24:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:24:25 compute-0 sudo[218313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byiwbnynozzqmvxefqjhixpljbtvyrma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393864.8963113-1148-133449294938564/AnsiballZ_command.py'
Nov 29 05:24:25 compute-0 sudo[218313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:25 compute-0 python3.9[218315]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 93f82912-647c-5e78-b081-707d0a2966d8
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
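[The two virsh calls above re-register the libvirt secret that the compute node uses for cephx authentication: an earlier task at 05:24:24 staged /tmp/secret.xml, and a later task at 05:24:28 passes FSID and KEY through the environment, apparently for the secret-set-value step. A sketch of the equivalent sequence; only the FSID/UUID appears verbatim in the log, so the usage name and key are placeholders:

    import subprocess

    FSID = "93f82912-647c-5e78-b081-707d0a2966d8"  # from the log

    # Hypothetical secret.xml body; the staged file's content is not logged.
    secret_xml = f"""<secret ephemeral='no' private='no'>
      <uuid>{FSID}</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>"""

    with open("/tmp/secret.xml", "w") as f:
        f.write(secret_xml)

    subprocess.run(["virsh", "secret-undefine", FSID], check=False)  # may not exist yet
    subprocess.run(["virsh", "secret-define", "--file", "/tmp/secret.xml"], check=True)
    subprocess.run(["virsh", "secret-set-value", "--secret", FSID,
                    "--base64", "<cephx-key>"], check=True)  # placeholder key

In the log the staged XML is deleted at 05:24:27, before the key is injected at 05:24:28, so the plaintext file does not outlive the step.]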
Nov 29 05:24:25 compute-0 polkitd[43510]: Registered Authentication Agent for unix-process:218317:297809 (system bus name :1.2882 [pkttyagent --process 218317 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 05:24:25 compute-0 polkitd[43510]: Unregistered Authentication Agent for unix-process:218317:297809 (system bus name :1.2882, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 05:24:25 compute-0 polkitd[43510]: Registered Authentication Agent for unix-process:218316:297808 (system bus name :1.2883 [pkttyagent --process 218316 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 05:24:25 compute-0 polkitd[43510]: Unregistered Authentication Agent for unix-process:218316:297808 (system bus name :1.2883, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 05:24:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:26 compute-0 sudo[218313]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:26 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 05:24:26 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 05:24:26 compute-0 ceph-mon[75176]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:27 compute-0 python3.9[218477]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:27 compute-0 sshd-session[217968]: Invalid user g from 101.47.141.125 port 52828
Nov 29 05:24:27 compute-0 sudo[218627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtefaytzqojntxnsdxrblsbzoiqucjjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393867.5453143-1164-12003055198458/AnsiballZ_command.py'
Nov 29 05:24:27 compute-0 sudo[218627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:27 compute-0 sshd-session[217968]: Received disconnect from 101.47.141.125 port 52828:11: Bye Bye [preauth]
Nov 29 05:24:27 compute-0 sshd-session[217968]: Disconnected from invalid user g 101.47.141.125 port 52828 [preauth]
Nov 29 05:24:28 compute-0 sudo[218627]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:28 compute-0 sudo[218780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcnybrvtmtfxphhxqevoddyyjvwwaomp ; FSID=93f82912-647c-5e78-b081-707d0a2966d8 KEY=AQCLfyppAAAAABAAXOcH7jxI2CDW0wmPcSvJrA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393868.2669444-1172-239743615747623/AnsiballZ_command.py'
Nov 29 05:24:28 compute-0 sudo[218780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:28 compute-0 polkitd[43510]: Registered Authentication Agent for unix-process:218783:298143 (system bus name :1.2886 [pkttyagent --process 218783 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 29 05:24:28 compute-0 polkitd[43510]: Unregistered Authentication Agent for unix-process:218783:298143 (system bus name :1.2886, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 29 05:24:28 compute-0 sudo[218780]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:28 compute-0 ceph-mon[75176]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:29 compute-0 sudo[218938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkxvfylpkgnqqmivyprvdgochbwjssdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393869.120476-1180-198610257045897/AnsiballZ_copy.py'
Nov 29 05:24:29 compute-0 sudo[218938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:29 compute-0 python3.9[218940]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:29 compute-0 sudo[218938]: pam_unix(sudo:session): session closed for user root
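[The copy above installs the deployment-generated /var/lib/openstack/config/ceph/ceph.conf as /etc/ceph/ceph.conf. Its contents are not logged; a hypothetical minimal client-side file for this cluster might look like the following, with the mon address guessed from the mgr's 192.168.122.100 endpoint seen elsewhere in the log:

    [global]
    fsid = 93f82912-647c-5e78-b081-707d0a2966d8
    mon host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]
]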
Nov 29 05:24:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:30 compute-0 sudo[219090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwftdskvjixjiqqvhrnrrpmswikqhcjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393869.9537823-1188-44964661678616/AnsiballZ_stat.py'
Nov 29 05:24:30 compute-0 sudo[219090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:30 compute-0 python3.9[219092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:30 compute-0 sudo[219090]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:30 compute-0 ceph-mon[75176]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:31 compute-0 sudo[219213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkvyxbrxogudilzysifeqsjwmhwoeddy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393869.9537823-1188-44964661678616/AnsiballZ_copy.py'
Nov 29 05:24:31 compute-0 sudo[219213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:31 compute-0 python3.9[219215]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393869.9537823-1188-44964661678616/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:31 compute-0 sudo[219213]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:32 compute-0 sudo[219366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hauzervrhijxhlbluzvvpdgmkqorkrrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393871.690273-1204-139045580408016/AnsiballZ_file.py'
Nov 29 05:24:32 compute-0 sudo[219366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:32 compute-0 python3.9[219368]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:32 compute-0 sudo[219366]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:32 compute-0 sudo[219518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irpckncogwgdrqrlfnqeumasciytlsyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393872.5468707-1212-45976764067571/AnsiballZ_stat.py'
Nov 29 05:24:32 compute-0 sudo[219518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:32 compute-0 ceph-mon[75176]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:33 compute-0 python3.9[219520]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:33 compute-0 sudo[219518]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:33 compute-0 sudo[219596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddrjpqkqlzwjsprlthcgurophvcpcmok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393872.5468707-1212-45976764067571/AnsiballZ_file.py'
Nov 29 05:24:33 compute-0 sudo[219596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:33 compute-0 python3.9[219598]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:33 compute-0 sudo[219596]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:34 compute-0 sudo[219748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsmjhonqqzuwshrlbnfrensawjjzfoaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393873.8774524-1224-172867470527635/AnsiballZ_stat.py'
Nov 29 05:24:34 compute-0 sudo[219748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:34 compute-0 python3.9[219750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:34 compute-0 sudo[219748]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:34 compute-0 sudo[219826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yybotdikcqchevjkryeqchdntwddszrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393873.8774524-1224-172867470527635/AnsiballZ_file.py'
Nov 29 05:24:34 compute-0 sudo[219826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:34 compute-0 ceph-mon[75176]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:35 compute-0 python3.9[219828]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ljyenoij recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:35 compute-0 sudo[219826]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:35 compute-0 sudo[219978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhnjigcttovueovxjdaagfeilmcabzvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393875.257404-1236-269244080704038/AnsiballZ_stat.py'
Nov 29 05:24:35 compute-0 sudo[219978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:35 compute-0 python3.9[219980]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:35 compute-0 sudo[219978]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:36 compute-0 sudo[220056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdnejchuknpvhnmvlxkfsryejzcwhtjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393875.257404-1236-269244080704038/AnsiballZ_file.py'
Nov 29 05:24:36 compute-0 sudo[220056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:36 compute-0 python3.9[220058]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:36 compute-0 sudo[220056]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:36 compute-0 ceph-mon[75176]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:37 compute-0 podman[220136]: 2025-11-29 05:24:37.091217018 +0000 UTC m=+0.128292277 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
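[The health_status=healthy event above is podman's periodic healthcheck for ovn_controller; per the embedded config_data, the probe runs /openstack/healthcheck from the bind-mounted /var/lib/openstack/healthchecks/ovn_controller directory. The same probe can be triggered on demand; a sketch:

    import subprocess

    # Exit status 0 means healthy; non-zero increments the failing streak
    # that podman reports as health_failing_streak in these journal events.
    r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if r.returncode == 0 else "unhealthy")
]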
Nov 29 05:24:37 compute-0 sudo[220235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfivzoxicchlwkbrlwnabbrpfvahfjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393876.7589967-1249-143811344895587/AnsiballZ_command.py'
Nov 29 05:24:37 compute-0 sudo[220235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:37 compute-0 python3.9[220237]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:24:37 compute-0 sudo[220235]: pam_unix(sudo:session): session closed for user root
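[nft -j list ruleset (above) dumps the live ruleset as JSON under a top-level "nftables" array, which is what the role captures before it rewrites the EDPM rule files. A minimal reader:

    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         capture_output=True, text=True, check=True).stdout
    # The array mixes metainfo, table, chain, and rule objects.
    for obj in json.loads(out).get("nftables", []):
        if "table" in obj:
            t = obj["table"]
            print(t["family"], t["name"])
]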
Nov 29 05:24:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:38 compute-0 sudo[220388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zurscsowmeyrravftyidthmiifktbyer ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764393877.749211-1257-192557314511054/AnsiballZ_edpm_nftables_from_files.py'
Nov 29 05:24:38 compute-0 sudo[220388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:38 compute-0 python3[220390]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 05:24:38 compute-0 sudo[220388]: pam_unix(sudo:session): session closed for user root
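[ansible-edpm_nftables_from_files (above) aggregates every rule file under /var/lib/edpm-config/firewall (the libvirt.yaml, edpm-nftables-base.yaml, and edpm-nftables-user-rules.yaml staged in the preceding tasks) into one rule list. A rough illustration of that aggregation, not the module's actual source; requires PyYAML:

    import pathlib
    import yaml  # PyYAML

    rules = []
    for path in sorted(pathlib.Path("/var/lib/edpm-config/firewall").glob("*.yaml")):
        loaded = yaml.safe_load(path.read_text()) or []
        rules.extend(loaded)  # assumes each file holds a list of rule entries
    print(f"{len(rules)} firewall rule entries collected")
]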
Nov 29 05:24:38 compute-0 ceph-mon[75176]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:39 compute-0 sudo[220540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxskqhvsqoluditncqkggajxcexdkkmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393878.8801935-1265-195008119590067/AnsiballZ_stat.py'
Nov 29 05:24:39 compute-0 sudo[220540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:39 compute-0 python3.9[220542]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:39 compute-0 sudo[220540]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:39 compute-0 sudo[220618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcfqyoxndsgocadzqelenvddetybabpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393878.8801935-1265-195008119590067/AnsiballZ_file.py'
Nov 29 05:24:39 compute-0 sudo[220618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:40 compute-0 python3.9[220620]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:40 compute-0 sudo[220618]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:40 compute-0 sudo[220770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtpxrmgxtoozsjelzdgglesstgpghxmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393880.3572073-1277-248587355106227/AnsiballZ_stat.py'
Nov 29 05:24:40 compute-0 sudo[220770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:40 compute-0 ceph-mon[75176]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:41 compute-0 python3.9[220772]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:41 compute-0 sudo[220770]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:24:41
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes', '.mgr', 'default.rgw.log', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:24:41 compute-0 sudo[220848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwccklglbucbgxbtflrymkvkexheptqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393880.3572073-1277-248587355106227/AnsiballZ_file.py'
Nov 29 05:24:41 compute-0 sudo[220848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:41 compute-0 python3.9[220850]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:41 compute-0 sudo[220848]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:42 compute-0 sshd-session[219240]: error: kex_exchange_identification: read: Connection timed out
Nov 29 05:24:42 compute-0 sshd-session[219240]: banner exchange: Connection from 120.48.175.69 port 35276: Connection timed out
Nov 29 05:24:42 compute-0 sudo[221000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjpivvypayyvuzxwopopunqlvkqqzwnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393882.0147243-1289-55781948689437/AnsiballZ_stat.py'
Nov 29 05:24:42 compute-0 sudo[221000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:42 compute-0 python3.9[221002]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:42 compute-0 sudo[221000]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:42 compute-0 ceph-mon[75176]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:43 compute-0 sudo[221078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrvfecnbcxzwejuaoxfxpqmjfzdcedfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393882.0147243-1289-55781948689437/AnsiballZ_file.py'
Nov 29 05:24:43 compute-0 sudo[221078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:43 compute-0 python3.9[221080]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:43 compute-0 sudo[221078]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:43 compute-0 sudo[221230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeawhyzetckgzyvqxduuihsszcxmgtxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393883.479303-1301-65483902829638/AnsiballZ_stat.py'
Nov 29 05:24:43 compute-0 sudo[221230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:44 compute-0 python3.9[221232]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:44 compute-0 sudo[221230]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:44 compute-0 sudo[221308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpzoklshucavozarjuvjdubncisvrcnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393883.479303-1301-65483902829638/AnsiballZ_file.py'
Nov 29 05:24:44 compute-0 sudo[221308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:44 compute-0 python3.9[221310]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:44 compute-0 sudo[221308]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:44 compute-0 ceph-mon[75176]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:45 compute-0 sudo[221460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibicwhnyyswivkujngofmnepshnfhqtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393884.902703-1313-175715023370737/AnsiballZ_stat.py'
Nov 29 05:24:45 compute-0 sudo[221460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:45 compute-0 python3.9[221462]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:45 compute-0 sudo[221460]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:45 compute-0 sudo[221585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmxdmrvnxhlawqlaauzoonbdjpkosneu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393884.902703-1313-175715023370737/AnsiballZ_copy.py'
Nov 29 05:24:45 compute-0 sudo[221585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:46 compute-0 python3.9[221587]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764393884.902703-1313-175715023370737/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:46 compute-0 sudo[221585]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:46 compute-0 sudo[221737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aekbqnrippjthplglovfdfyunshmginf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393886.2381139-1328-253027011770638/AnsiballZ_file.py'
Nov 29 05:24:46 compute-0 sudo[221737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:46 compute-0 python3.9[221739]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:46 compute-0 sudo[221737]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:47 compute-0 ceph-mon[75176]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:47 compute-0 sudo[221889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jltpcfcorllwvyxuelygvpuhjkholxmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393887.065752-1336-116078561609256/AnsiballZ_command.py'
Nov 29 05:24:47 compute-0 sudo[221889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:47 compute-0 podman[221891]: 2025-11-29 05:24:47.613001214 +0000 UTC m=+0.084508062 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 05:24:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:47 compute-0 python3.9[221892]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:24:47 compute-0 sudo[221889]: pam_unix(sudo:session): session closed for user root
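[The pipeline above is a dry run: the five EDPM nft files are concatenated in dependency order (chains first, jumps last) and fed to nft -c -f -, which parses and validates the ruleset without touching the kernel. Equivalent logic:

    import subprocess

    files = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]
    blob = "".join(open(f).read() for f in files)
    # -c = check only; -f - = read the ruleset from stdin
    subprocess.run(["nft", "-c", "-f", "-"], input=blob, text=True, check=True)
]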
Nov 29 05:24:48 compute-0 sudo[222063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itaknqojpacvsamsnfqvwpskhiifybbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393887.994999-1344-15817599094026/AnsiballZ_blockinfile.py'
Nov 29 05:24:48 compute-0 sudo[222063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:48 compute-0 python3.9[222065]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:48 compute-0 sudo[222063]: pam_unix(sudo:session): session closed for user root
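[Given the blockinfile parameters above (marker "# {mark} ANSIBLE MANAGED BLOCK" with BEGIN/END, and the four include lines as the block), the managed section written into /etc/sysconfig/nftables.conf comes out as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The validate='nft -c -f %s' parameter means the candidate file is syntax-checked before it replaces the original, so a bad include cannot break nftables.service at boot.]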
Nov 29 05:24:49 compute-0 ceph-mon[75176]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:49 compute-0 sudo[222215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxcuiypgqwaiwiqiwkumstvdumumfovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393889.0025127-1353-1869166750945/AnsiballZ_command.py'
Nov 29 05:24:49 compute-0 sudo[222215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:49 compute-0 python3.9[222217]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:24:49 compute-0 sudo[222215]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:50 compute-0 ceph-mon[75176]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:50 compute-0 sudo[222368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edyedgbiffvcwtivykyzduyzhcanetzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393889.7586555-1361-118495885310599/AnsiballZ_stat.py'
Nov 29 05:24:50 compute-0 sudo[222368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:50 compute-0 python3.9[222370]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:24:50 compute-0 sudo[222368]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:51 compute-0 sudo[222523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nadmudobqcwaowxgbvtpizmsescadpcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393890.7631927-1369-189242985926788/AnsiballZ_command.py'
Nov 29 05:24:51 compute-0 sudo[222523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
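[The pg_autoscaler figures above are internally consistent: each pool's pg target equals its capacity ratio * bias * a cluster PG budget of 300, which matches 3 OSDs at the default mon_target_pg_per_osd=100 (the OSD count is an inference; the log shows only 60 GiB total). Checking two of the logged values:

    ratio_mgr = 7.185749983720779e-06    # '.mgr' capacity ratio from the log
    ratio_meta = 5.087256625643029e-07   # 'cephfs.cephfs.meta' ratio, bias 4.0
    budget = 300                         # assumed: 3 OSDs * 100 target PGs/OSD

    print(ratio_mgr * 1.0 * budget)   # 0.0021557249951162337 -> quantized to 1
    print(ratio_meta * 4.0 * budget)  # 0.0006104707950771635 -> quantized to 16
]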
Nov 29 05:24:51 compute-0 python3.9[222525]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:24:51 compute-0 sudo[222523]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:52 compute-0 sudo[222678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjsxwollqddiyldlisqzxzwgyhlbpllg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393891.6568522-1377-208218343731204/AnsiballZ_file.py'
Nov 29 05:24:52 compute-0 sudo[222678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:52 compute-0 python3.9[222680]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:52 compute-0 sudo[222678]: pam_unix(sudo:session): session closed for user root
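[The file task above deletes /etc/nftables/edpm-rules.nft.changed, closing out a sentinel pattern visible across this section: the flag was touched when edpm-rules.nft was rewritten (05:24:46), stat'ed (05:24:50), and only then were the flush/rules/update-jump files applied with nft -f - (05:24:51) before the flag was cleared. In outline:

    import os
    import subprocess

    FLAG = "/etc/nftables/edpm-rules.nft.changed"
    if os.path.exists(FLAG):  # rules were rewritten during this run
        blob = "".join(open(f).read() for f in (
            "/etc/nftables/edpm-flushes.nft",
            "/etc/nftables/edpm-rules.nft",
            "/etc/nftables/edpm-update-jumps.nft",
        ))
        subprocess.run(["nft", "-f", "-"], input=blob, text=True, check=True)
        os.remove(FLAG)  # unchanged future runs skip the reload entirely
]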
Nov 29 05:24:52 compute-0 ceph-mon[75176]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:52 compute-0 sudo[222830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huvukiouktvtavsigzjzaofyljmdryzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393892.425085-1385-269280242915304/AnsiballZ_stat.py'
Nov 29 05:24:52 compute-0 sudo[222830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:53 compute-0 python3.9[222832]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:53 compute-0 sudo[222830]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:53 compute-0 sudo[222953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqtaerkppxnbgrsurfbukdnxohrzochk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393892.425085-1385-269280242915304/AnsiballZ_copy.py'
Nov 29 05:24:53 compute-0 sudo[222953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:53 compute-0 python3.9[222955]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393892.425085-1385-269280242915304/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:53 compute-0 sudo[222953]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:54 compute-0 sudo[223105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbeprdgqygcibpyarozmsocobuvowooe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393893.933264-1400-9072851297326/AnsiballZ_stat.py'
Nov 29 05:24:54 compute-0 sudo[223105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:54 compute-0 python3.9[223107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:54 compute-0 sudo[223105]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:54 compute-0 ceph-mon[75176]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:24:55 compute-0 sudo[223228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkjpmtheplpqsawvsoyjnijjklkkojss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393893.933264-1400-9072851297326/AnsiballZ_copy.py'
Nov 29 05:24:55 compute-0 sudo[223228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:55 compute-0 python3.9[223230]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393893.933264-1400-9072851297326/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:55 compute-0 sudo[223228]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:55 compute-0 sudo[223380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waicgpcevtychrbpmgwjspwzqnaiaizy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393895.577155-1415-364850219449/AnsiballZ_stat.py'
Nov 29 05:24:55 compute-0 sudo[223380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:56 compute-0 python3.9[223382]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:24:56 compute-0 sudo[223380]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:56 compute-0 sudo[223503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nktxcvknoddqyjoundbtgcodtvormeaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393895.577155-1415-364850219449/AnsiballZ_copy.py'
Nov 29 05:24:56 compute-0 sudo[223503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:56 compute-0 ceph-mon[75176]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:56 compute-0 python3.9[223505]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393895.577155-1415-364850219449/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:24:56 compute-0 sudo[223503]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:57 compute-0 sudo[223655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxwkyenaodtmjafdfqubkrgnjqmsnuvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393896.9594514-1430-242364674586759/AnsiballZ_systemd.py'
Nov 29 05:24:57 compute-0 sudo[223655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:57 compute-0 python3.9[223657]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:24:57 compute-0 systemd[1]: Reloading.
Nov 29 05:24:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:57 compute-0 systemd-sysv-generator[223687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:24:57 compute-0 systemd-rc-local-generator[223683]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:24:57 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 05:24:57 compute-0 sudo[223655]: pam_unix(sudo:session): session closed for user root
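[The systemd task above performs a daemon-reload, enables edpm_libvirt.target, and restarts it; "Reached target" confirms the unit went active immediately, since a .target only groups dependencies and runs nothing itself. The unit file's contents are not logged (only its checksum at 05:24:53); a hypothetical minimal shape for a target that pulls in the modular libvirt daemons:

    [Unit]
    Description=EDPM libvirt target (hypothetical contents)
    Wants=virtqemud.service virtproxyd.service virtsecretd.service

    [Install]
    WantedBy=multi-user.target
]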
Nov 29 05:24:58 compute-0 sudo[223845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnbbhqbgvwkkswimvshnwiafjdrilkpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393898.1341894-1438-181912507466285/AnsiballZ_systemd.py'
Nov 29 05:24:58 compute-0 sudo[223845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:24:58 compute-0 ceph-mon[75176]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:58 compute-0 python3.9[223847]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 05:24:58 compute-0 systemd[1]: Reloading.
Nov 29 05:24:58 compute-0 systemd-sysv-generator[223876]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:24:58 compute-0 systemd-rc-local-generator[223871]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:24:59 compute-0 systemd[1]: Reloading.
Nov 29 05:24:59 compute-0 systemd-sysv-generator[223913]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:24:59 compute-0 systemd-rc-local-generator[223909]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:24:59 compute-0 sudo[223845]: pam_unix(sudo:session): session closed for user root
Nov 29 05:24:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:24:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:00 compute-0 sshd-session[164096]: Connection closed by 192.168.122.30 port 56172
Nov 29 05:25:00 compute-0 sshd-session[164093]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:25:00 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 29 05:25:00 compute-0 systemd[1]: session-48.scope: Consumed 3min 44.583s CPU time.
Nov 29 05:25:00 compute-0 systemd-logind[793]: Session 48 logged out. Waiting for processes to exit.
Nov 29 05:25:00 compute-0 systemd-logind[793]: Removed session 48.
Nov 29 05:25:00 compute-0 ceph-mon[75176]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:01 compute-0 sshd-session[222410]: error: kex_exchange_identification: read: Connection timed out
Nov 29 05:25:01 compute-0 sshd-session[222410]: banner exchange: Connection from 120.48.175.69 port 39066: Connection timed out
Nov 29 05:25:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:02 compute-0 ceph-mon[75176]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:03 compute-0 sshd-session[223944]: Invalid user cc from 45.120.216.232 port 46526
Nov 29 05:25:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:03 compute-0 sshd-session[223944]: Received disconnect from 45.120.216.232 port 46526:11: Bye Bye [preauth]
Nov 29 05:25:03 compute-0 sshd-session[223944]: Disconnected from invalid user cc 45.120.216.232 port 46526 [preauth]
Nov 29 05:25:04 compute-0 ceph-mon[75176]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:06 compute-0 sshd-session[223946]: Accepted publickey for zuul from 192.168.122.30 port 34648 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:25:06 compute-0 systemd-logind[793]: New session 49 of user zuul.
Nov 29 05:25:06 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 29 05:25:06 compute-0 sshd-session[223946]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
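
For reference, the SHA256:... value in the "Accepted publickey" line above is OpenSSH's key fingerprint: the base64-encoded SHA-256 digest of the raw key blob, with padding stripped. A small sketch (the helper name is ours, not OpenSSH's):

    import base64, hashlib

    def ssh_fingerprint(authorized_keys_line: str) -> str:
        blob = base64.b64decode(authorized_keys_line.split()[1])  # raw key blob
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
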
Nov 29 05:25:06 compute-0 ceph-mon[75176]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:07 compute-0 podman[224073]: 2025-11-29 05:25:07.342677123 +0000 UTC m=+0.110854477 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
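
The config_data=... blob podman records for the ovn_controller healthcheck is a Python dict literal (single quotes, bare True/False), so once sliced out of the line it parses with ast.literal_eval; an abbreviated sample from the entry above:

    import ast

    sample = ("{'depends_on': ['openvswitch.service'], "
              "'healthcheck': {'test': '/openstack/healthcheck'}, "
              "'net': 'host', 'privileged': True}")
    config = ast.literal_eval(sample)     # safe literal parsing, no eval()
    print(config["healthcheck"]["test"])  # -> /openstack/healthcheck
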
Nov 29 05:25:07 compute-0 python3.9[224112]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:25:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:08 compute-0 ceph-mon[75176]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:08 compute-0 python3.9[224282]: ansible-ansible.builtin.service_facts Invoked
Nov 29 05:25:09 compute-0 network[224300]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 05:25:09 compute-0 network[224301]: 'network-scripts' will be removed from distribution in near future.
Nov 29 05:25:09 compute-0 network[224302]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 05:25:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:10 compute-0 ceph-mon[75176]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:25:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:25:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:25:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:25:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:25:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:25:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:12 compute-0 ceph-mon[75176]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:13 compute-0 sudo[224572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjggdlyfknizotzeahziuwpxtvvroarv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393913.1842732-47-6223275799404/AnsiballZ_setup.py'
Nov 29 05:25:13 compute-0 sudo[224572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:25:13.735 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:25:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:25:13.737 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:25:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:25:13.737 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:25:13 compute-0 python3.9[224574]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 05:25:14 compute-0 sudo[224572]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:14 compute-0 ceph-mon[75176]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:14 compute-0 sudo[224656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pygxnfugnydimfunilgjfknqdrpojrmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393913.1842732-47-6223275799404/AnsiballZ_dnf.py'
Nov 29 05:25:14 compute-0 sudo[224656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:15 compute-0 python3.9[224658]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
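
Host-level equivalent of the logged dnf module call (a sketch; the module itself goes through the dnf Python API rather than the CLI):

    import subprocess

    # state=present: install only if missing; package name from the log.
    subprocess.run(["dnf", "install", "-y", "iscsi-initiator-utils"], check=True)
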
Nov 29 05:25:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:16 compute-0 sshd[190545]: Timeout before authentication for connection from 120.48.175.69 to 38.102.83.17, pid = 203840
Nov 29 05:25:16 compute-0 ceph-mon[75176]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:18 compute-0 podman[224660]: 2025-11-29 05:25:18.008314853 +0000 UTC m=+0.065870614 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 05:25:18 compute-0 ceph-mon[75176]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:19 compute-0 sshd-session[224283]: error: kex_exchange_identification: read: Connection timed out
Nov 29 05:25:19 compute-0 sshd-session[224283]: banner exchange: Connection from 120.48.175.69 port 43334: Connection timed out
Nov 29 05:25:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:20 compute-0 sudo[224656]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:20 compute-0 ceph-mon[75176]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:20 compute-0 sudo[224829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jujsuqfftrdajrhkgxzwultlhqciqens ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393920.3218436-59-60585228474371/AnsiballZ_stat.py'
Nov 29 05:25:20 compute-0 sudo[224829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:21 compute-0 python3.9[224831]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:25:21 compute-0 sudo[224829]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:21 compute-0 sudo[224981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsxarnyrnqroxzgknqqpnxxwdkpjkobl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393921.401056-69-269232412492185/AnsiballZ_command.py'
Nov 29 05:25:21 compute-0 sudo[224981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:22 compute-0 python3.9[224983]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:25:22 compute-0 sudo[224981]: pam_unix(sudo:session): session closed for user root
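
The restorecon flags in the command above make it a dry run: -n reports rather than relabels, -v prints each mismatch, -r recurses. A sketch of running the same check and treating any output as "relabel needed":

    import subprocess

    result = subprocess.run(
        ["/usr/sbin/restorecon", "-nvr", "/etc/iscsi", "/var/lib/iscsi"],
        capture_output=True, text=True, check=True,
    )
    needs_relabel = bool(result.stdout.strip())  # non-empty output => wrong contexts found
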
Nov 29 05:25:22 compute-0 ceph-mon[75176]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:22 compute-0 sudo[225134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaqrhcbkjvvmpbxskwlwbstjcwsnbuxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393922.459051-79-72973393266791/AnsiballZ_stat.py'
Nov 29 05:25:22 compute-0 sudo[225134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:23 compute-0 python3.9[225136]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:25:23 compute-0 sudo[225134]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:23 compute-0 sudo[225286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lczdwikcjsncrdaplazjpwqcgwritoza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393923.3456874-87-259263255523757/AnsiballZ_command.py'
Nov 29 05:25:23 compute-0 sudo[225286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:23 compute-0 python3.9[225288]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:25:23 compute-0 sudo[225286]: pam_unix(sudo:session): session closed for user root
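
iscsi-iname prints a fresh initiator IQN; a rough stand-in (assumed format, not the tool's exact algorithm) that mimics the RHEL-style output:

    import secrets

    def fake_iscsi_iname(prefix: str = "iqn.1994-05.com.redhat") -> str:
        # Assumption: distribution prefix plus a random hex suffix.
        return f"{prefix}:{secrets.token_hex(6)}"
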
Nov 29 05:25:24 compute-0 sudo[225389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:25:24 compute-0 sudo[225389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:24 compute-0 sudo[225389]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:24 compute-0 sudo[225431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:25:24 compute-0 sudo[225431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:24 compute-0 sudo[225431]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:24 compute-0 sudo[225502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzioyosjphnnrlxqzouinmwlfzpjewbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393924.2416449-95-22764366444102/AnsiballZ_stat.py'
Nov 29 05:25:24 compute-0 sudo[225502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:24 compute-0 sudo[225474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:25:24 compute-0 sudo[225474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:24 compute-0 sudo[225474]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:24 compute-0 sudo[225517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:25:24 compute-0 sudo[225517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:24 compute-0 ceph-mon[75176]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:24 compute-0 python3.9[225514]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:25:24 compute-0 sudo[225502]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:25 compute-0 sudo[225517]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:25:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:25:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:25:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:25:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:25:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:25:25 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 46a12d9b-c7b4-48cd-9f6b-5ec3dac912e0 does not exist
Nov 29 05:25:25 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 4a3ae231-556f-442f-8435-19350988ebc3 does not exist
Nov 29 05:25:25 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 88fbd292-bdc0-4b31-ae16-7731ae8cd413 does not exist
Nov 29 05:25:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:25:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:25:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:25:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:25:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:25:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
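
The audited mon commands above can be reproduced from any node holding admin credentials; a sketch using the ceph CLI with the same command prefixes that appear in the dispatch lines:

    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], capture_output=True,
                              text=True, check=True).stdout

    minimal_conf = ceph("config", "generate-minimal-conf")
    admin_key = ceph("auth", "get", "client.admin")
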
Nov 29 05:25:25 compute-0 sudo[225666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:25:25 compute-0 sudo[225666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:25 compute-0 sudo[225666]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:25 compute-0 sudo[225717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvrprixfrktdcrzmodaayqsyqkguapa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393924.2416449-95-22764366444102/AnsiballZ_copy.py'
Nov 29 05:25:25 compute-0 sudo[225717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:25 compute-0 sudo[225721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:25:25 compute-0 sudo[225721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:25 compute-0 sudo[225721]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:25 compute-0 sudo[225746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:25:25 compute-0 sudo[225746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:25 compute-0 sudo[225746]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:25 compute-0 python3.9[225720]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393924.2416449-95-22764366444102/.source.iscsi _original_basename=.f2km9x74 follow=False checksum=1dbba20fb5a1b47e97ac8ad50a96437d1e78147b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
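
The copy module decides "changed" by comparing SHA-1 digests; a sketch of that check against the digest recorded in the log line above:

    import hashlib

    def sha1_of(path: str) -> str:
        with open(path, "rb") as fh:
            return hashlib.sha1(fh.read()).hexdigest()

    expected = "1dbba20fb5a1b47e97ac8ad50a96437d1e78147b"  # checksum= from the log
    changed = sha1_of("/etc/iscsi/initiatorname.iscsi") != expected
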
Nov 29 05:25:25 compute-0 sudo[225771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:25:25 compute-0 sudo[225771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:25 compute-0 sudo[225717]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:25:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:25:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:25:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:25:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:25:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:25:25 compute-0 podman[225896]: 2025-11-29 05:25:25.994631409 +0000 UTC m=+0.057037951 container create 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 05:25:26 compute-0 systemd[1]: Started libpod-conmon-7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe.scope.
Nov 29 05:25:26 compute-0 podman[225896]: 2025-11-29 05:25:25.968990979 +0000 UTC m=+0.031397611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:25:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:25:26 compute-0 podman[225896]: 2025-11-29 05:25:26.085346812 +0000 UTC m=+0.147753394 container init 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 05:25:26 compute-0 podman[225896]: 2025-11-29 05:25:26.096247803 +0000 UTC m=+0.158654345 container start 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:25:26 compute-0 podman[225896]: 2025-11-29 05:25:26.100099561 +0000 UTC m=+0.162506103 container attach 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:25:26 compute-0 great_kepler[225930]: 167 167
Nov 29 05:25:26 compute-0 systemd[1]: libpod-7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe.scope: Deactivated successfully.
Nov 29 05:25:26 compute-0 conmon[225930]: conmon 7617823e2dc0fb412264 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe.scope/container/memory.events
Nov 29 05:25:26 compute-0 podman[225896]: 2025-11-29 05:25:26.106452828 +0000 UTC m=+0.168859370 container died 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a5896b736dc73252fc2961ad72e9039df518d50a41a58b7828652753f10a2b7-merged.mount: Deactivated successfully.
Nov 29 05:25:26 compute-0 podman[225896]: 2025-11-29 05:25:26.15621828 +0000 UTC m=+0.218624822 container remove 7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kepler, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:25:26 compute-0 systemd[1]: libpod-conmon-7617823e2dc0fb412264e028fc1bb083fb61ff869412ad94fb5fe14148c1fffe.scope: Deactivated successfully.
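
The create/init/start/attach/died/remove sequence for great_kepler above is one short-lived `podman run --rm` lifecycle; a sketch of driving a one-shot container the same way (image digest from the log, the trivial command is ours):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # --rm removes the container after exit, which is why podman logs
    # "container died" followed immediately by "container remove".
    subprocess.run(["podman", "run", "--rm", image, "true"], check=True)
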
Nov 29 05:25:26 compute-0 podman[225979]: 2025-11-29 05:25:26.356603563 +0000 UTC m=+0.054917062 container create 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:25:26 compute-0 systemd[1]: Started libpod-conmon-7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803.scope.
Nov 29 05:25:26 compute-0 podman[225979]: 2025-11-29 05:25:26.337393683 +0000 UTC m=+0.035707262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:25:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
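
The kernel's "supports timestamps until 2038 (0x7fffffff)" warnings above are the signed 32-bit epoch limit on XFS without the bigtime feature; converting the constant back gives the familiar date:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch == the 32-bit time_t rollover.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
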
Nov 29 05:25:26 compute-0 sudo[226050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjkyayermlrqrdeehzaakmwyryhziuch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393925.8819516-110-210081936531445/AnsiballZ_file.py'
Nov 29 05:25:26 compute-0 sudo[226050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:26 compute-0 podman[225979]: 2025-11-29 05:25:26.473523479 +0000 UTC m=+0.171837008 container init 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:25:26 compute-0 podman[225979]: 2025-11-29 05:25:26.486604979 +0000 UTC m=+0.184918518 container start 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:25:26 compute-0 podman[225979]: 2025-11-29 05:25:26.491433891 +0000 UTC m=+0.189747390 container attach 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:25:26 compute-0 python3.9[226052]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:26 compute-0 sudo[226050]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:26 compute-0 ceph-mon[75176]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:27 compute-0 sudo[226212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwkbqjoqksdrhkwhgnuvonfvtwpzidgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393926.8116202-118-116635955238715/AnsiballZ_lineinfile.py'
Nov 29 05:25:27 compute-0 sudo[226212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:27 compute-0 python3.9[226214]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:27 compute-0 sudo[226212]: pam_unix(sudo:session): session closed for user root
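
A minimal sketch of what the lineinfile call above does to /etc/iscsi/iscsid.conf (parameters taken from the log entry; the logic is a simplification of the module, not its implementation): replace the first line matching regexp, else insert line after the insertafter anchor.

    import re
    from pathlib import Path

    def line_in_file(path: str, regexp: str, line: str, insertafter: str) -> None:
        lines = Path(path).read_text().splitlines()
        pat, anchor = re.compile(regexp), re.compile(insertafter)
        for i, existing in enumerate(lines):
            if pat.search(existing):
                lines[i] = line              # present: normalize in place
                break
        else:
            idx = next((i for i, l in enumerate(lines) if anchor.search(l)),
                       len(lines) - 1)
            lines.insert(idx + 1, line)      # absent: insert after the anchor
        Path(path).write_text("\n".join(lines) + "\n")

    line_in_file("/etc/iscsi/iscsid.conf",
                 r"^node.session.auth.chap_algs",
                 "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5",
                 r"^#node.session.auth.chap.algs")
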
Nov 29 05:25:27 compute-0 optimistic_kirch[226036]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:25:27 compute-0 optimistic_kirch[226036]: --> relative data size: 1.0
Nov 29 05:25:27 compute-0 optimistic_kirch[226036]: --> All data devices are unavailable
Nov 29 05:25:27 compute-0 systemd[1]: libpod-7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803.scope: Deactivated successfully.
Nov 29 05:25:27 compute-0 podman[225979]: 2025-11-29 05:25:27.642288345 +0000 UTC m=+1.340601834 container died 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:25:27 compute-0 systemd[1]: libpod-7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803.scope: Consumed 1.084s CPU time.
Nov 29 05:25:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:27 compute-0 sshd[190545]: drop connection #1 from [120.48.175.69]:47012 on [38.102.83.17]:22 penalty: connections without attempting authentication
Nov 29 05:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e597067be139dcac84d700a3b22df09076c3f9bb3f887d67d57e51985750c56b-merged.mount: Deactivated successfully.
Nov 29 05:25:27 compute-0 podman[225979]: 2025-11-29 05:25:27.819448875 +0000 UTC m=+1.517762404 container remove 7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:25:27 compute-0 systemd[1]: libpod-conmon-7ceb84b0378ccce50c82648cd148282fe62d256dc8d0f125d00caacb923e3803.scope: Deactivated successfully.
Nov 29 05:25:27 compute-0 sudo[225771]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:27 compute-0 sudo[226297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:25:27 compute-0 sudo[226297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:27 compute-0 sudo[226297]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:27 compute-0 sudo[226346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:25:27 compute-0 sudo[226346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:27 compute-0 sudo[226346]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:28 compute-0 sudo[226371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:25:28 compute-0 sudo[226371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:28 compute-0 sudo[226371]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:28 compute-0 sudo[226396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:25:28 compute-0 sudo[226396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
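
The JSON this command prints (visible further down as the eager_gould output) maps OSD ids to LV records; a sketch that runs the same listing and flattens it to OSD -> devices (calling ceph-volume directly is an assumption; the log goes through the cephadm wrapper):

    import json, subprocess

    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    osd_devices = {
        osd_id: [dev for lv in lvs for dev in lv["devices"]]
        for osd_id, lvs in json.loads(raw).items()
    }
    # e.g. {"0": ["/dev/loop3"], ...} per the output below.
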
Nov 29 05:25:28 compute-0 sudo[226534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyxxfzbnldpzsvzqxrwyreqtxqsbuxlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393927.8259623-127-486348896877/AnsiballZ_systemd_service.py'
Nov 29 05:25:28 compute-0 sudo[226534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:28 compute-0 podman[226535]: 2025-11-29 05:25:28.583883274 +0000 UTC m=+0.048518596 container create 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:25:28 compute-0 systemd[1]: Started libpod-conmon-9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8.scope.
Nov 29 05:25:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:25:28 compute-0 podman[226535]: 2025-11-29 05:25:28.657608378 +0000 UTC m=+0.122243740 container init 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:25:28 compute-0 podman[226535]: 2025-11-29 05:25:28.565062682 +0000 UTC m=+0.029698044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:25:28 compute-0 podman[226535]: 2025-11-29 05:25:28.667864743 +0000 UTC m=+0.132500055 container start 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:25:28 compute-0 podman[226535]: 2025-11-29 05:25:28.671913676 +0000 UTC m=+0.136549028 container attach 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 05:25:28 compute-0 kind_noether[226554]: 167 167
Nov 29 05:25:28 compute-0 systemd[1]: libpod-9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8.scope: Deactivated successfully.
Nov 29 05:25:28 compute-0 podman[226535]: 2025-11-29 05:25:28.674937196 +0000 UTC m=+0.139572508 container died 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:25:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b0a904cad86c7db57501565daadf43bb7182743c14c9dadc818be633bc082fe-merged.mount: Deactivated successfully.
Nov 29 05:25:28 compute-0 podman[226535]: 2025-11-29 05:25:28.715911296 +0000 UTC m=+0.180546608 container remove 9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:25:28 compute-0 systemd[1]: libpod-conmon-9fc7713e1ea27f76b835f668ea8509979bec3ae4d6a3a08ddc4067b3955a9ec8.scope: Deactivated successfully.
Nov 29 05:25:28 compute-0 python3.9[226543]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:25:28 compute-0 ceph-mon[75176]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:28 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 05:25:28 compute-0 sudo[226534]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:28 compute-0 podman[226581]: 2025-11-29 05:25:28.9415764 +0000 UTC m=+0.070348757 container create fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:25:28 compute-0 systemd[1]: Started libpod-conmon-fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03.scope.
Nov 29 05:25:29 compute-0 podman[226581]: 2025-11-29 05:25:28.917117438 +0000 UTC m=+0.045889575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:25:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a21d52a8113d684885e204000002978c02038d543c51ba096e990b277d171d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a21d52a8113d684885e204000002978c02038d543c51ba096e990b277d171d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a21d52a8113d684885e204000002978c02038d543c51ba096e990b277d171d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7a21d52a8113d684885e204000002978c02038d543c51ba096e990b277d171d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:29 compute-0 podman[226581]: 2025-11-29 05:25:29.039008868 +0000 UTC m=+0.167781005 container init fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:25:29 compute-0 podman[226581]: 2025-11-29 05:25:29.055389154 +0000 UTC m=+0.184161241 container start fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:25:29 compute-0 podman[226581]: 2025-11-29 05:25:29.059295624 +0000 UTC m=+0.188067761 container attach fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:25:29 compute-0 sudo[226754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezogkwajrasxrzgkqlfedmkcmervgrta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393929.1637816-135-156082078381221/AnsiballZ_systemd_service.py'
Nov 29 05:25:29 compute-0 sudo[226754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:29 compute-0 eager_gould[226612]: {
Nov 29 05:25:29 compute-0 eager_gould[226612]:     "0": [
Nov 29 05:25:29 compute-0 eager_gould[226612]:         {
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "devices": [
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "/dev/loop3"
Nov 29 05:25:29 compute-0 eager_gould[226612]:             ],
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_name": "ceph_lv0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_size": "21470642176",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "name": "ceph_lv0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "tags": {
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.cluster_name": "ceph",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.crush_device_class": "",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.encrypted": "0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.osd_id": "0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.type": "block",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.vdo": "0"
Nov 29 05:25:29 compute-0 eager_gould[226612]:             },
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "type": "block",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "vg_name": "ceph_vg0"
Nov 29 05:25:29 compute-0 eager_gould[226612]:         }
Nov 29 05:25:29 compute-0 eager_gould[226612]:     ],
Nov 29 05:25:29 compute-0 eager_gould[226612]:     "1": [
Nov 29 05:25:29 compute-0 eager_gould[226612]:         {
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "devices": [
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "/dev/loop4"
Nov 29 05:25:29 compute-0 eager_gould[226612]:             ],
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_name": "ceph_lv1",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_size": "21470642176",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "name": "ceph_lv1",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "tags": {
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.cluster_name": "ceph",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.crush_device_class": "",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.encrypted": "0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.osd_id": "1",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.type": "block",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.vdo": "0"
Nov 29 05:25:29 compute-0 eager_gould[226612]:             },
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "type": "block",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "vg_name": "ceph_vg1"
Nov 29 05:25:29 compute-0 eager_gould[226612]:         }
Nov 29 05:25:29 compute-0 eager_gould[226612]:     ],
Nov 29 05:25:29 compute-0 eager_gould[226612]:     "2": [
Nov 29 05:25:29 compute-0 eager_gould[226612]:         {
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "devices": [
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "/dev/loop5"
Nov 29 05:25:29 compute-0 eager_gould[226612]:             ],
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_name": "ceph_lv2",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_size": "21470642176",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "name": "ceph_lv2",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "tags": {
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.cluster_name": "ceph",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.crush_device_class": "",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.encrypted": "0",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.osd_id": "2",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.type": "block",
Nov 29 05:25:29 compute-0 eager_gould[226612]:                 "ceph.vdo": "0"
Nov 29 05:25:29 compute-0 eager_gould[226612]:             },
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "type": "block",
Nov 29 05:25:29 compute-0 eager_gould[226612]:             "vg_name": "ceph_vg2"
Nov 29 05:25:29 compute-0 eager_gould[226612]:         }
Nov 29 05:25:29 compute-0 eager_gould[226612]:     ]
Nov 29 05:25:29 compute-0 eager_gould[226612]: }
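The JSON above is the stdout of the one-shot eager_gould helper container. The lv_*/vg_name/tags fields match the shape of "ceph-volume lvm list --format json" output, although the exact command line for this particular container is not captured in the log. A minimal sketch of consuming such output, assuming it has been saved to a file first (summarize_lvm_list is an illustrative name, not part of the tooling):

    import json

    def summarize_lvm_list(text):
        """Print one line per OSD from 'ceph-volume lvm list --format json' output."""
        listing = json.loads(text)
        for osd_id, entries in sorted(listing.items(), key=lambda kv: int(kv[0])):
            for entry in entries:
                tags = entry.get("tags", {})
                print(f"osd.{osd_id}: lv={entry['lv_path']} "
                      f"devices={','.join(entry['devices'])} "
                      f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

    # e.g. summarize_lvm_list(open("lvm_list.json").read()) would print:
    # osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 osd_fsid=3cc3f442-...
    # osd.1: lv=/dev/ceph_vg1/ceph_lv1 devices=/dev/loop4 osd_fsid=b9801566-...
    # osd.2: lv=/dev/ceph_vg2/ceph_lv2 devices=/dev/loop5 osd_fsid=eec69945-...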
Nov 29 05:25:29 compute-0 systemd[1]: libpod-fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03.scope: Deactivated successfully.
Nov 29 05:25:29 compute-0 podman[226581]: 2025-11-29 05:25:29.858202655 +0000 UTC m=+0.986974742 container died fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7a21d52a8113d684885e204000002978c02038d543c51ba096e990b277d171d-merged.mount: Deactivated successfully.
Nov 29 05:25:29 compute-0 python3.9[226756]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:25:29 compute-0 podman[226581]: 2025-11-29 05:25:29.936957464 +0000 UTC m=+1.065729551 container remove fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 05:25:29 compute-0 systemd[1]: libpod-conmon-fb0be0089bcd754aec5a19f824b85958f6eb780db2c01179556b7bbecbbfbe03.scope: Deactivated successfully.
Nov 29 05:25:29 compute-0 sudo[226396]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:29 compute-0 systemd[1]: Reloading.
Nov 29 05:25:30 compute-0 systemd-sysv-generator[226829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:25:30 compute-0 systemd-rc-local-generator[226824]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:25:30 compute-0 sudo[226777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:25:30 compute-0 sudo[226777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:30 compute-0 sudo[226777]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:30 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 05:25:30 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 29 05:25:30 compute-0 sudo[226838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:25:30 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 05:25:30 compute-0 sudo[226838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:30 compute-0 sudo[226838]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:30 compute-0 systemd[1]: Started Open-iSCSI.
Nov 29 05:25:30 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 05:25:30 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 05:25:30 compute-0 sudo[226754]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:30 compute-0 sudo[226867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:25:30 compute-0 sudo[226867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:30 compute-0 sudo[226867]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:30 compute-0 sudo[226916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:25:30 compute-0 sudo[226916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:30 compute-0 podman[227010]: 2025-11-29 05:25:30.822069666 +0000 UTC m=+0.038194510 container create 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:25:30 compute-0 systemd[1]: Started libpod-conmon-8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08.scope.
Nov 29 05:25:30 compute-0 ceph-mon[75176]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:25:30 compute-0 podman[227010]: 2025-11-29 05:25:30.896427573 +0000 UTC m=+0.112552457 container init 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 05:25:30 compute-0 podman[227010]: 2025-11-29 05:25:30.806779674 +0000 UTC m=+0.022904538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:25:30 compute-0 podman[227010]: 2025-11-29 05:25:30.902537953 +0000 UTC m=+0.118662797 container start 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:25:30 compute-0 podman[227010]: 2025-11-29 05:25:30.905624024 +0000 UTC m=+0.121748918 container attach 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 05:25:30 compute-0 pedantic_cray[227056]: 167 167
Nov 29 05:25:30 compute-0 systemd[1]: libpod-8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08.scope: Deactivated successfully.
Nov 29 05:25:30 compute-0 podman[227010]: 2025-11-29 05:25:30.911172612 +0000 UTC m=+0.127297496 container died 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-16f1f9d32f760be6161f8a76977e729ad3d73f5761a1061061789f4ec0636388-merged.mount: Deactivated successfully.
Nov 29 05:25:30 compute-0 podman[227010]: 2025-11-29 05:25:30.940750811 +0000 UTC m=+0.156875655 container remove 8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cray, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 05:25:30 compute-0 systemd[1]: libpod-conmon-8ac064423c5eeb674e087a7070c93ffb4b5583fe98e37d59aa8a17ba027dad08.scope: Deactivated successfully.
Nov 29 05:25:31 compute-0 sudo[227148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwwklkrtraouutrvmkaobziuawofxloq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393930.7600272-146-173833076128900/AnsiballZ_service_facts.py'
Nov 29 05:25:31 compute-0 sudo[227148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:31 compute-0 podman[227153]: 2025-11-29 05:25:31.12791292 +0000 UTC m=+0.047797758 container create dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:25:31 compute-0 systemd[1]: Started libpod-conmon-dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722.scope.
Nov 29 05:25:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53ee23432acecc783a25d2aadd31c1111e5f47c295054aee4e48cf85a9999f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53ee23432acecc783a25d2aadd31c1111e5f47c295054aee4e48cf85a9999f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53ee23432acecc783a25d2aadd31c1111e5f47c295054aee4e48cf85a9999f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:31 compute-0 podman[227153]: 2025-11-29 05:25:31.107739597 +0000 UTC m=+0.027624475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:25:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53ee23432acecc783a25d2aadd31c1111e5f47c295054aee4e48cf85a9999f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:25:31 compute-0 podman[227153]: 2025-11-29 05:25:31.217172121 +0000 UTC m=+0.137056989 container init dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:25:31 compute-0 podman[227153]: 2025-11-29 05:25:31.23802737 +0000 UTC m=+0.157912238 container start dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:25:31 compute-0 podman[227153]: 2025-11-29 05:25:31.241908758 +0000 UTC m=+0.161793606 container attach dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:25:31 compute-0 python3.9[227161]: ansible-ansible.builtin.service_facts Invoked
Nov 29 05:25:31 compute-0 network[227192]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 05:25:31 compute-0 network[227193]: 'network-scripts' will be removed from distribution in near future.
Nov 29 05:25:31 compute-0 network[227194]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 05:25:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]: {
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "osd_id": 0,
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "type": "bluestore"
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:     },
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "osd_id": 1,
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "type": "bluestore"
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:     },
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "osd_id": 2,
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:         "type": "bluestore"
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]:     }
Nov 29 05:25:32 compute-0 xenodochial_wilson[227171]: }
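This second listing, keyed by OSD UUID, is the "ceph-volume ... raw list --format json" result requested through cephadm in the sudo line at 05:25:30 above. A small sketch, under the assumption that the same JSON is passed in as text, that cross-checks every entry against the cluster fsid given on that command line (check_raw_list is an invented helper name):

    import json

    CLUSTER_FSID = "93f82912-647c-5e78-b081-707d0a2966d8"  # the --fsid passed to cephadm above

    def check_raw_list(text, fsid=CLUSTER_FSID):
        """Return the osd_ids whose ceph_fsid matches the expected cluster fsid."""
        ok = []
        for osd_uuid, info in json.loads(text).items():
            if info.get("ceph_fsid") == fsid and info.get("type") == "bluestore":
                ok.append(info["osd_id"])
            else:
                print(f"unexpected entry for {osd_uuid}: {info}")
        return sorted(ok)

    # For the output above this returns [0, 1, 2], matching the three LVM-backed OSDs.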
Nov 29 05:25:32 compute-0 systemd[1]: libpod-dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722.scope: Deactivated successfully.
Nov 29 05:25:32 compute-0 systemd[1]: libpod-dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722.scope: Consumed 1.073s CPU time.
Nov 29 05:25:32 compute-0 podman[227232]: 2025-11-29 05:25:32.363180464 +0000 UTC m=+0.034835471 container died dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:25:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a53ee23432acecc783a25d2aadd31c1111e5f47c295054aee4e48cf85a9999f4-merged.mount: Deactivated successfully.
Nov 29 05:25:32 compute-0 podman[227232]: 2025-11-29 05:25:32.42129876 +0000 UTC m=+0.092953697 container remove dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_wilson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:25:32 compute-0 systemd[1]: libpod-conmon-dfb5f9de0060a796b9bbe71e91b203c0c449a39abf86e58604e41265a7af0722.scope: Deactivated successfully.
Nov 29 05:25:32 compute-0 sudo[226916]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:25:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:25:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:25:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:25:32 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d465fc83-1791-4aa7-89a2-e21da226cc45 does not exist
Nov 29 05:25:32 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 6b0d1509-4b22-4fca-995c-e7fa208ec254 does not exist
Nov 29 05:25:32 compute-0 sudo[227256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:25:32 compute-0 sudo[227256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:32 compute-0 sudo[227256]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:32 compute-0 sudo[227284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:25:32 compute-0 sudo[227284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:25:32 compute-0 sudo[227284]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:32 compute-0 ceph-mon[75176]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:25:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:25:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:34 compute-0 sudo[227148]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:34 compute-0 ceph-mon[75176]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:35 compute-0 sudo[227558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugyjehloesmmsjhjeoyrysrhgfcpwsmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393935.2700512-156-222897088895801/AnsiballZ_file.py'
Nov 29 05:25:35 compute-0 sudo[227558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:35 compute-0 python3.9[227560]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 05:25:35 compute-0 sudo[227558]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:36 compute-0 sudo[227710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqzdxeesvslzcrhvbisxuiuvokukjjya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393936.0957751-164-236231979955118/AnsiballZ_modprobe.py'
Nov 29 05:25:36 compute-0 sudo[227710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:36 compute-0 ceph-mon[75176]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:36 compute-0 python3.9[227712]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 05:25:36 compute-0 sudo[227710]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:37 compute-0 sudo[227879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brdsiozhybxrjrxkmwbckjtolpxnucjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393937.2381482-172-138585904918642/AnsiballZ_stat.py'
Nov 29 05:25:37 compute-0 sudo[227879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:37 compute-0 podman[227840]: 2025-11-29 05:25:37.649649025 +0000 UTC m=+0.115075725 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 05:25:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:37 compute-0 python3.9[227886]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:25:37 compute-0 sudo[227879]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:38 compute-0 sudo[228015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pstnoeibvnbygprrqvjgpdpqichafnxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393937.2381482-172-138585904918642/AnsiballZ_copy.py'
Nov 29 05:25:38 compute-0 sudo[228015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:38 compute-0 python3.9[228017]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393937.2381482-172-138585904918642/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:38 compute-0 sudo[228015]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:38 compute-0 ceph-mon[75176]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:39 compute-0 sudo[228167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjzdjgycrwsoxvvpwntvovgnvwznkvtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393938.6670792-188-242804636662126/AnsiballZ_lineinfile.py'
Nov 29 05:25:39 compute-0 sudo[228167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:39 compute-0 python3.9[228169]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:39 compute-0 sudo[228167]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:40 compute-0 sudo[228319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlvpndjrhowyozjhfswgywxamhwlgbye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393939.45094-196-220540120658998/AnsiballZ_systemd.py'
Nov 29 05:25:40 compute-0 sudo[228319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:40 compute-0 python3.9[228321]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:25:40 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 05:25:40 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 29 05:25:40 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 29 05:25:40 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 05:25:40 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 05:25:40 compute-0 sudo[228319]: pam_unix(sudo:session): session closed for user root
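The sequence from 05:25:36 to 05:25:40 (modprobe of dm-multipath, a rendered file copied to /etc/modules-load.d/dm-multipath.conf, a dm-multipath line ensured in /etc/modules, then a restart of systemd-modules-load.service) loads the module immediately and keeps it loading on every boot. The copied file's body is not echoed in the log; given the matching lineinfile content it is presumably just the bare module name:

    dm-multipath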
Nov 29 05:25:40 compute-0 ceph-mon[75176]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:41 compute-0 sudo[228475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvinukokgrfdtrhcqzortjubdugbzfyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393940.8489864-204-163327957025637/AnsiballZ_file.py'
Nov 29 05:25:41 compute-0 sudo[228475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:25:41
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'vms']
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:25:41 compute-0 python3.9[228477]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:25:41 compute-0 sudo[228475]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:25:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:41 compute-0 sudo[228627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vstrzvzycspiuwvwmcbhaklzvkzcowzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393941.684626-213-259646480377766/AnsiballZ_stat.py'
Nov 29 05:25:41 compute-0 sudo[228627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:42 compute-0 python3.9[228629]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:25:42 compute-0 sudo[228627]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:42 compute-0 sudo[228779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msgjtfqdqtxratgxwjphwcddcgeztrlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393942.392309-222-227882093053238/AnsiballZ_stat.py'
Nov 29 05:25:42 compute-0 sudo[228779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:42 compute-0 python3.9[228781]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:25:42 compute-0 sudo[228779]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:42 compute-0 ceph-mon[75176]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:43 compute-0 sudo[228931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noexygtcqebjeqhtjtytizjacmyyzdyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393943.111588-230-109934371066025/AnsiballZ_stat.py'
Nov 29 05:25:43 compute-0 sudo[228931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:43 compute-0 python3.9[228933]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:25:43 compute-0 sudo[228931]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:43 compute-0 sudo[229054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuydrphabespoqivjoaoerwlllrxohzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393943.111588-230-109934371066025/AnsiballZ_copy.py'
Nov 29 05:25:43 compute-0 sudo[229054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:44 compute-0 python3.9[229056]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393943.111588-230-109934371066025/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:44 compute-0 sudo[229054]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:44 compute-0 sudo[229206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyysyzririxlcaktgtmudootaxebcumm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393944.3980322-245-140444746038666/AnsiballZ_command.py'
Nov 29 05:25:44 compute-0 sudo[229206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:44 compute-0 ceph-mon[75176]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:45 compute-0 python3.9[229208]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:25:45 compute-0 sudo[229206]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:45 compute-0 sudo[229361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kldrktwpawrdlkymklgtxhardtoewkbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393945.322507-253-265036479700688/AnsiballZ_lineinfile.py'
Nov 29 05:25:45 compute-0 sudo[229361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:45 compute-0 python3.9[229363]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:45 compute-0 sudo[229361]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:46 compute-0 sshd-session[229293]: Received disconnect from 152.32.145.111 port 55164:11: Bye Bye [preauth]
Nov 29 05:25:46 compute-0 sshd-session[229293]: Disconnected from authenticating user root 152.32.145.111 port 55164 [preauth]
Nov 29 05:25:46 compute-0 sudo[229513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwojrjnxpiyieucasxbcfutsqmeivvii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393946.123005-261-216967805846444/AnsiballZ_replace.py'
Nov 29 05:25:46 compute-0 sudo[229513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:46 compute-0 python3.9[229515]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:46 compute-0 sudo[229513]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:46 compute-0 ceph-mon[75176]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:47 compute-0 sudo[229665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zniutqqacsxotjnxkdievpamxbhrhilz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393947.1274846-269-123803275537883/AnsiballZ_replace.py'
Nov 29 05:25:47 compute-0 sudo[229665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:47 compute-0 python3.9[229667]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:47 compute-0 sudo[229665]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:48 compute-0 sudo[229831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbuqfdzidzhfgalfjbjonfobrpeublgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393947.9195988-278-17500854391523/AnsiballZ_lineinfile.py'
Nov 29 05:25:48 compute-0 sudo[229831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:48 compute-0 podman[229791]: 2025-11-29 05:25:48.280165987 +0000 UTC m=+0.061858412 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 05:25:48 compute-0 python3.9[229836]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:48 compute-0 sudo[229831]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:48 compute-0 sudo[229986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgntlitcjyvbflqvflgqxmuwqqyphtag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393948.592879-278-217319521180119/AnsiballZ_lineinfile.py'
Nov 29 05:25:48 compute-0 sudo[229986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:48 compute-0 ceph-mon[75176]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:49 compute-0 python3.9[229988]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:49 compute-0 sudo[229986]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:49 compute-0 sudo[230138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llvjfuyzevknwhpkzlhwcgelwabazmjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393949.3027787-278-268150172202753/AnsiballZ_lineinfile.py'
Nov 29 05:25:49 compute-0 sudo[230138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:49 compute-0 python3.9[230140]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:49 compute-0 sudo[230138]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:50 compute-0 sudo[230290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmschbvommmczhzdynzqkacfbourihxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393949.9981418-278-5139719919964/AnsiballZ_lineinfile.py'
Nov 29 05:25:50 compute-0 sudo[230290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:50 compute-0 sshd[190545]: drop connection #1 from [120.48.175.69]:50884 on [38.102.83.17]:22 penalty: connections without attempting authentication
Nov 29 05:25:50 compute-0 python3.9[230292]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:50 compute-0 sudo[230290]: pam_unix(sudo:session): session closed for user root
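Taken together, the replace task at 05:25:47 and the four lineinfile tasks that follow it converge /etc/multipath.conf on a fixed defaults block and drop the catch-all devnode entry from the blacklist. Each lineinfile uses insertafter=^defaults with firstmatch=True, so the options land directly under the defaults header. A minimal reconstruction from the logged regexp/line parameters (the file itself is never printed, so everything outside these lines is assumed):

    defaults {
            find_multipaths yes
            recheck_wwid yes
            skip_kpartx yes
            user_friendly_names no
    }
    blacklist {
    }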
Nov 29 05:25:50 compute-0 ceph-mon[75176]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:51 compute-0 sudo[230442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgztegrjohdxqoczvfsvbklnmloublob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393950.7945352-307-201821116966731/AnsiballZ_stat.py'
Nov 29 05:25:51 compute-0 sudo[230442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
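Every pg_autoscaler pair above fits a single relation: pg target = usage fraction x bias x 300, with the result then quantized to the counts shown in each line. The factor 300 is an inference, consistent with Ceph's default mon_target_pg_per_osd of 100 across the 3 OSDs implied by the 60 GiB capacity; a quick Python check against the logged values:

    # 100 and 3 are assumptions (Ceph's default mon_target_pg_per_osd and an
    # inferred OSD count); the usage fractions and biases are from the log.
    def pg_target(usage_fraction, bias, target_per_osd=100, osds=3):
        return usage_fraction * bias * target_per_osd * osds

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337 ('.mgr')
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 ('cephfs.cephfs.meta')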
Nov 29 05:25:51 compute-0 python3.9[230444]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:25:51 compute-0 sudo[230442]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:52 compute-0 sudo[230596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jruwvunxftywgrcfzdnfgmnmclvygcfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393951.7050602-315-256788779704989/AnsiballZ_file.py'
Nov 29 05:25:52 compute-0 sudo[230596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:52 compute-0 python3.9[230598]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:52 compute-0 sudo[230596]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:52 compute-0 ceph-mon[75176]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:53 compute-0 sudo[230748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbidhzntyavgppaynbjmkizdsjsrvsuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393952.6234891-324-165341061877522/AnsiballZ_file.py'
Nov 29 05:25:53 compute-0 sudo[230748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:53 compute-0 python3.9[230750]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:25:53 compute-0 sudo[230748]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:53 compute-0 sudo[230900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ausmccktxirnydjunoxrpfjmztnqiphs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393953.4909363-332-45735432235347/AnsiballZ_stat.py'
Nov 29 05:25:53 compute-0 sudo[230900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:54 compute-0 python3.9[230902]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:25:54 compute-0 sudo[230900]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:54 compute-0 sudo[230978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-proznueiltknovjvxzcvufyebtdbmoci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393953.4909363-332-45735432235347/AnsiballZ_file.py'
Nov 29 05:25:54 compute-0 sudo[230978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:54 compute-0 python3.9[230980]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:25:54 compute-0 sudo[230978]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.804976) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954805015, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1834, "num_deletes": 250, "total_data_size": 3088085, "memory_usage": 3121256, "flush_reason": "Manual Compaction"}
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954821004, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1737857, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11741, "largest_seqno": 13574, "table_properties": {"data_size": 1731941, "index_size": 2991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14860, "raw_average_key_size": 20, "raw_value_size": 1718827, "raw_average_value_size": 2325, "num_data_blocks": 139, "num_entries": 739, "num_filter_entries": 739, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764393744, "oldest_key_time": 1764393744, "file_creation_time": 1764393954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 16098 microseconds, and 4648 cpu microseconds.
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.821072) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1737857 bytes OK
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.821098) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.822641) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.822661) EVENT_LOG_v1 {"time_micros": 1764393954822654, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.822685) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3080349, prev total WAL file size 3080349, number of live WAL files 2.
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.823974) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323532' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1697KB)], [29(7723KB)]
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954824062, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9646234, "oldest_snapshot_seqno": -1}
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4019 keys, 7647896 bytes, temperature: kUnknown
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954883831, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7647896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7619189, "index_size": 17589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95532, "raw_average_key_size": 23, "raw_value_size": 7544852, "raw_average_value_size": 1877, "num_data_blocks": 767, "num_entries": 4019, "num_filter_entries": 4019, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764393954, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.884142) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7647896 bytes
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.885716) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.2 rd, 127.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.5 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.0) write-amplify(4.4) OK, records in: 4432, records dropped: 413 output_compression: NoCompression
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.885747) EVENT_LOG_v1 {"time_micros": 1764393954885732, "job": 12, "event": "compaction_finished", "compaction_time_micros": 59850, "compaction_time_cpu_micros": 34024, "output_level": 6, "num_output_files": 1, "total_output_size": 7647896, "num_input_records": 4432, "num_output_records": 4019, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954886513, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764393954889488, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.823867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.889576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.889581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.889584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.889585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:25:54 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:25:54.889587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
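The amplification figures in the JOB 12 summary can be re-derived from the byte counts in the surrounding events: the L0 input (table #31) is 1737857 bytes, input_data_size is 9646234, and the output (table #32) is 7647896 bytes. A short check:

    l0_in = 1737857              # flushed table #31
    l6_in = 9646234 - l0_in      # table #29, via input_data_size
    out   = 7647896              # compacted table #32
    print(round(out / l0_in, 1))                    # 4.4  -> write-amplify(4.4)
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 10.0 -> read-write-amplify(10.0)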
Nov 29 05:25:55 compute-0 sudo[231130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pptlrvmutmngjfidzhixnzmfnwopxgfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393954.8062887-332-211627030227347/AnsiballZ_stat.py'
Nov 29 05:25:55 compute-0 sudo[231130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:55 compute-0 ceph-mon[75176]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:55 compute-0 python3.9[231132]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:25:55 compute-0 sudo[231130]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:55 compute-0 sshd[190545]: Timeout before authentication for connection from 120.48.175.69 to 38.102.83.17, pid = 211666
Nov 29 05:25:55 compute-0 sudo[231208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdxphslvezddwusvhpvtcrxhshcozeqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393954.8062887-332-211627030227347/AnsiballZ_file.py'
Nov 29 05:25:55 compute-0 sudo[231208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:56 compute-0 python3.9[231210]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:25:56 compute-0 sudo[231208]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:56 compute-0 ceph-mon[75176]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:56 compute-0 sudo[231360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpdpofaxlobnxclxtqgtriyymffhqcyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393956.2897513-355-101514353426335/AnsiballZ_file.py'
Nov 29 05:25:56 compute-0 sudo[231360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:56 compute-0 python3.9[231362]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:56 compute-0 sudo[231360]: pam_unix(sudo:session): session closed for user root
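One detail worth flagging in the task above: it logs mode=420 where the neighbouring tasks log mode=0644. The on-disk result is identical; 420 is octal 0644 read as decimal, the usual symptom of an unquoted mode in a playbook:

    # 420 decimal is 0644 octal, the equivalence behind mode=420 above.
    assert 0o644 == 420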
Nov 29 05:25:57 compute-0 sudo[231512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npzqkwiqhhioqnfzfanpzpktlrghccfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393956.9901612-363-128545095764528/AnsiballZ_stat.py'
Nov 29 05:25:57 compute-0 sudo[231512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:57 compute-0 python3.9[231514]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:25:57 compute-0 sudo[231512]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:57 compute-0 sudo[231590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxfufocykvlqdgbjeyiyeybrornuywhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393956.9901612-363-128545095764528/AnsiballZ_file.py'
Nov 29 05:25:57 compute-0 sudo[231590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:58 compute-0 python3.9[231592]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:58 compute-0 sudo[231590]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:58 compute-0 sudo[231742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muuqregrztxdowwhchybdiawvmjslbcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393958.2149553-375-63134186724765/AnsiballZ_stat.py'
Nov 29 05:25:58 compute-0 sudo[231742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:58 compute-0 ceph-mon[75176]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:58 compute-0 python3.9[231744]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:25:58 compute-0 sudo[231742]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:59 compute-0 sudo[231820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjekrtwwebzoncygkwopewphvrmxmuvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393958.2149553-375-63134186724765/AnsiballZ_file.py'
Nov 29 05:25:59 compute-0 sudo[231820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:25:59 compute-0 python3.9[231822]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:25:59 compute-0 sudo[231820]: pam_unix(sudo:session): session closed for user root
Nov 29 05:25:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:25:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:25:59 compute-0 sudo[231972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oujgtosxoawuxjpyzvvjeaeiqigitula ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393959.5142868-387-140404430873222/AnsiballZ_systemd.py'
Nov 29 05:25:59 compute-0 sudo[231972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:00 compute-0 python3.9[231974]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:00 compute-0 systemd[1]: Reloading.
Nov 29 05:26:00 compute-0 systemd-rc-local-generator[232000]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:26:00 compute-0 systemd-sysv-generator[232004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:26:00 compute-0 sudo[231972]: pam_unix(sudo:session): session closed for user root
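The unit file and the 91-edpm-container-shutdown.preset staged just before this reload follow the standard systemd preset pattern: the preset records the desired enablement state so that systemctl preset and first-boot logic can enable the unit without hand-made symlinks. The preset's contents never appear in the log; a file of this kind is typically a single line, e.g.:

    enable edpm-container-shutdown.service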
Nov 29 05:26:00 compute-0 ceph-mon[75176]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:01 compute-0 sudo[232160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoownwgphlmtxvkzpzpbqveyplaecphs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393960.7953415-395-208620618832908/AnsiballZ_stat.py'
Nov 29 05:26:01 compute-0 sudo[232160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:01 compute-0 python3.9[232162]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:26:01 compute-0 sudo[232160]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:01 compute-0 sudo[232238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsdcwmiaerbhbtbasovsokzhjxhzcxti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393960.7953415-395-208620618832908/AnsiballZ_file.py'
Nov 29 05:26:01 compute-0 sudo[232238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:01 compute-0 python3.9[232240]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:01 compute-0 sudo[232238]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:02 compute-0 sudo[232390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utrsphhgxsyiglgwzshuvyoswmfiazht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393962.1029687-407-27291324204737/AnsiballZ_stat.py'
Nov 29 05:26:02 compute-0 sudo[232390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:02 compute-0 python3.9[232392]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:26:02 compute-0 ceph-mon[75176]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:02 compute-0 sudo[232390]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:03 compute-0 sudo[232468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukcurojuueveurjeoqfetvpwuascpqja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393962.1029687-407-27291324204737/AnsiballZ_file.py'
Nov 29 05:26:03 compute-0 sudo[232468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:03 compute-0 python3.9[232470]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:03 compute-0 sudo[232468]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:03 compute-0 sudo[232620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxffkzhfjzghaqnrjrpsnyhgywnxyurm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393963.5629966-419-83005078258803/AnsiballZ_systemd.py'
Nov 29 05:26:03 compute-0 sudo[232620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:04 compute-0 python3.9[232622]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:04 compute-0 systemd[1]: Reloading.
Nov 29 05:26:04 compute-0 systemd-rc-local-generator[232640]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:26:04 compute-0 systemd-sysv-generator[232647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:26:04 compute-0 ceph-mon[75176]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:04 compute-0 systemd[1]: Starting Create netns directory...
Nov 29 05:26:04 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 05:26:04 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 05:26:04 compute-0 systemd[1]: Finished Create netns directory.
Nov 29 05:26:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:04 compute-0 sudo[232620]: pam_unix(sudo:session): session closed for user root
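netns-placeholder finishes and deactivates immediately, and a transient run-netns-placeholder.mount comes and goes with it. That is consistent with a oneshot whose only job is to make /run/netns exist as a mount before containers bind it with the ':shared' option (see the '/run/netns:/run/netns:shared' entry in the ovn_metadata_agent volumes above). The unit body is not in the log; a plausible sketch of its commands, assumptions flagged:

    # Assumed commands; only the unit name, the "Create netns directory"
    # description, and the transient mount unit appear in the journal.
    ip netns add placeholder
    ip netns delete placeholder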
Nov 29 05:26:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:05 compute-0 sudo[232813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiaribuarafcsukwkoaialpcrdcovprs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393965.3078778-429-158350222412762/AnsiballZ_file.py'
Nov 29 05:26:05 compute-0 sudo[232813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:05 compute-0 python3.9[232815]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:26:05 compute-0 sudo[232813]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:06 compute-0 sudo[232965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akufycyfgnpadzeqksrvboylarhnzjxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393966.1677082-437-33971305157331/AnsiballZ_stat.py'
Nov 29 05:26:06 compute-0 sudo[232965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:06 compute-0 ceph-mon[75176]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:06 compute-0 python3.9[232967]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:26:06 compute-0 sudo[232965]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:07 compute-0 sudo[233088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xggmtlyiqgtgpvlvdebleffwpeyixbat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393966.1677082-437-33971305157331/AnsiballZ_copy.py'
Nov 29 05:26:07 compute-0 sudo[233088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:07 compute-0 python3.9[233090]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764393966.1677082-437-33971305157331/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:26:07 compute-0 sudo[233088]: pam_unix(sudo:session): session closed for user root
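This copy completes the healthcheck wiring that the earlier health_status events already show for ovn_metadata_agent and ovn_controller: the script is staged at /var/lib/openstack/healthchecks/<service>/healthcheck on the host, bind-mounted read-only at /openstack inside the container, and registered as the container's healthcheck test. Once the multipathd container exists, the same probe podman runs on its timer can be triggered by hand; a sketch:

    import subprocess
    # Runs the configured healthcheck once; assumes the multipathd container
    # created later in this log is up. Exit status 0 means healthy.
    subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'], check=True)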
Nov 29 05:26:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:08 compute-0 podman[233115]: 2025-11-29 05:26:08.101707238 +0000 UTC m=+0.154229494 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 05:26:08 compute-0 sudo[233266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfkrqotlolyzzjonuxszwqgrxcjfmurv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393968.0009856-454-270693852806660/AnsiballZ_file.py'
Nov 29 05:26:08 compute-0 sudo[233266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:08 compute-0 python3.9[233268]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:26:08 compute-0 sudo[233266]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:08 compute-0 ceph-mon[75176]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:09 compute-0 sudo[233418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgldrxqanzirlbrgmgfzjjvchshwemlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393968.8243692-462-43787007799434/AnsiballZ_stat.py'
Nov 29 05:26:09 compute-0 sudo[233418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:09 compute-0 python3.9[233420]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:26:09 compute-0 sudo[233418]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:09 compute-0 sshd[190545]: drop connection #0 from [120.48.175.69]:55534 on [38.102.83.17]:22 penalty: exceeded LoginGraceTime
Nov 29 05:26:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:09 compute-0 sudo[233541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrgjztuffprmtcmizcfowsmekthrdzfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393968.8243692-462-43787007799434/AnsiballZ_copy.py'
Nov 29 05:26:09 compute-0 sudo[233541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:10 compute-0 python3.9[233543]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393968.8243692-462-43787007799434/.source.json _original_basename=.1ghertuh follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:10 compute-0 sudo[233541]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:10 compute-0 sudo[233693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uilawpjruubzixvqxwhphcybfkbjoqzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393970.3422344-477-192468729808250/AnsiballZ_file.py'
Nov 29 05:26:10 compute-0 sudo[233693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:10 compute-0 ceph-mon[75176]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:10 compute-0 python3.9[233695]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:10 compute-0 sudo[233693]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:26:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:26:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:26:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:26:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:26:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:26:11 compute-0 sudo[233845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skzzfeoczydlssccrhvkfenuxftrjfjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393971.190996-485-165095444121190/AnsiballZ_stat.py'
Nov 29 05:26:11 compute-0 sudo[233845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:11 compute-0 sudo[233845]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:12 compute-0 sudo[233968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wylzglzwcnbwrcnuflmklfuykkclfkxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393971.190996-485-165095444121190/AnsiballZ_copy.py'
Nov 29 05:26:12 compute-0 sudo[233968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:12 compute-0 sudo[233968]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:12 compute-0 ceph-mon[75176]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:13 compute-0 sudo[234120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdbwtobaoqrynqidqktuzosxhpwdkfhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393972.7210937-502-264920182121601/AnsiballZ_container_config_data.py'
Nov 29 05:26:13 compute-0 sudo[234120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:13 compute-0 python3.9[234122]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 05:26:13 compute-0 sudo[234120]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:26:13.736 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:26:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:26:13.737 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:26:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:26:13.737 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:26:14 compute-0 sudo[234272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hymtnetquvdcorksfddeanvjsckadezx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393973.7292883-511-93731476999430/AnsiballZ_container_config_hash.py'
Nov 29 05:26:14 compute-0 sudo[234272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:14 compute-0 python3.9[234274]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 05:26:14 compute-0 sudo[234272]: pam_unix(sudo:session): session closed for user root
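container_config_hash appears to be what feeds the EDPM_CONFIG_HASH value seen in container environments (compare '0823bd3e...87d' in the ovn_metadata_agent config_data above), so a changed config yields a new hash and, on a later run, a redeploy. The hashing scheme itself is not shown in the log; the 64 hex digits are at least consistent with SHA-256:

    # Assumption: the digest algorithm is not printed in the log; 64 hex
    # digits merely matches the SHA-256 output size.
    assert len('0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d') == 64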
Nov 29 05:26:14 compute-0 ceph-mon[75176]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:15 compute-0 sudo[234424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcxajhmuurrqnugtibagoydltrjbbnnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393974.7703073-520-161368043987719/AnsiballZ_podman_container_info.py'
Nov 29 05:26:15 compute-0 sudo[234424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:15 compute-0 python3.9[234426]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 05:26:15 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 05:26:15 compute-0 sudo[234424]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:16 compute-0 ceph-mon[75176]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:16 compute-0 sudo[234603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkvqomvnntnjqqslyjjrkxovznvykvyc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764393976.37693-533-42116350392180/AnsiballZ_edpm_container_manage.py'
Nov 29 05:26:16 compute-0 sudo[234603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:16 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 05:26:17 compute-0 python3[234605]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 05:26:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:18 compute-0 sshd-session[234614]: Invalid user builder from 45.120.216.232 port 45418
Nov 29 05:26:18 compute-0 podman[234622]: 2025-11-29 05:26:18.539586826 +0000 UTC m=+1.319150862 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 05:26:18 compute-0 podman[234656]: 2025-11-29 05:26:18.546548746 +0000 UTC m=+0.166899165 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 05:26:18 compute-0 sshd-session[234614]: Received disconnect from 45.120.216.232 port 45418:11: Bye Bye [preauth]
Nov 29 05:26:18 compute-0 sshd-session[234614]: Disconnected from invalid user builder 45.120.216.232 port 45418 [preauth]
Nov 29 05:26:18 compute-0 podman[234699]: 2025-11-29 05:26:18.738483124 +0000 UTC m=+0.077902769 container create 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:26:18 compute-0 podman[234699]: 2025-11-29 05:26:18.702911508 +0000 UTC m=+0.042331203 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 05:26:18 compute-0 python3[234605]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
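The PODMAN-CONTAINER-DEBUG line is a literal dump of the CLI that edpm_container_manage generated from config_data; each key maps one-to-one onto a flag. Reduced to the flags that drive the lifecycle seen below (journald logging, host network, healthcheck), an equivalent minimal invocation would be:

    podman create --name multipathd \
      --healthcheck-command /openstack/healthcheck \
      --log-driver journald --log-level info \
      --network host --privileged \
      --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro \
      --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z \
      quay.io/podified-antelope-centos9/openstack-multipathd:current-podified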
Nov 29 05:26:18 compute-0 ceph-mon[75176]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:18 compute-0 sudo[234603]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:19 compute-0 sudo[234888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atcjxwrgdhfyzspgpqtgbvsjosxtlchn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393979.1418662-541-5503554811935/AnsiballZ_stat.py'
Nov 29 05:26:19 compute-0 sudo[234888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:19 compute-0 python3.9[234890]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:26:19 compute-0 sudo[234888]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:20 compute-0 sudo[235042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiitngogqzxeccwivklghedpynpzlnsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393980.0964308-550-138788034130013/AnsiballZ_file.py'
Nov 29 05:26:20 compute-0 sudo[235042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:20 compute-0 python3.9[235044]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:20 compute-0 sudo[235042]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:20 compute-0 ceph-mon[75176]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:20 compute-0 sudo[235118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etgrqpzmwoxmgqagpznuchyltrvoiogu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393980.0964308-550-138788034130013/AnsiballZ_stat.py'
Nov 29 05:26:20 compute-0 sudo[235118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:21 compute-0 python3.9[235120]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:26:21 compute-0 sudo[235118]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:21 compute-0 sudo[235269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zngjuqgzsgkseayrgescdetwndsvxeok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393981.2333207-550-266962473204191/AnsiballZ_copy.py'
Nov 29 05:26:21 compute-0 sudo[235269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:22 compute-0 python3.9[235271]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764393981.2333207-550-266962473204191/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
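The unit content itself is not logged (content=NOT_LOGGING_PARAMETER), but the Description systemd prints below ("multipathd container") and the --conmon-pidfile from the create call constrain its shape. A hypothetical sketch of what /etc/systemd/system/edpm_multipathd.service plausibly contains:

    # Hypothetical reconstruction; the deployed file is not shown in this log.
    [Unit]
    Description=multipathd container
    After=network-online.target

    [Service]
    Type=forking
    Restart=always
    PIDFile=/run/multipathd.pid
    ExecStart=/usr/bin/podman start multipathd
    ExecStop=/usr/bin/podman stop -t 10 multipathd

    [Install]
    WantedBy=multi-user.target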
Nov 29 05:26:22 compute-0 sudo[235269]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:22 compute-0 sudo[235345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxedvjlcylgwbttwcsbxtiuhpdkvyoal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393981.2333207-550-266962473204191/AnsiballZ_systemd.py'
Nov 29 05:26:22 compute-0 sudo[235345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:22 compute-0 python3.9[235347]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 05:26:22 compute-0 systemd[1]: Reloading.
Nov 29 05:26:22 compute-0 systemd-sysv-generator[235377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:26:22 compute-0 systemd-rc-local-generator[235370]: /etc/rc.d/rc.local is not marked executable, skipping.
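Both generator messages are routine noise on every daemon-reload: the SysV network script has no native unit, and rc.local is skipped because it is not executable. If rc.local were actually wanted at boot, the second message would go away with:

    chmod +x /etc/rc.d/rc.local
    systemctl daemon-reload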
Nov 29 05:26:22 compute-0 ceph-mon[75176]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:23 compute-0 sudo[235345]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:23 compute-0 sudo[235456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybxufhrqgxvlffxizplcaygriopmonvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393981.2333207-550-266962473204191/AnsiballZ_systemd.py'
Nov 29 05:26:23 compute-0 sudo[235456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:23 compute-0 python3.9[235458]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:24 compute-0 systemd[1]: Reloading.
Nov 29 05:26:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:24 compute-0 ceph-mon[75176]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:24 compute-0 systemd-sysv-generator[235487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:26:24 compute-0 systemd-rc-local-generator[235480]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:26:25 compute-0 systemd[1]: Starting multipathd container...
Nov 29 05:26:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f491a639b723a357c5c1de3df2c2b97ac1b7a76a35af6d21e14ef8bb38ba88/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f491a639b723a357c5c1de3df2c2b97ac1b7a76a35af6d21e14ef8bb38ba88/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:25 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c.
Nov 29 05:26:25 compute-0 podman[235498]: 2025-11-29 05:26:25.335603741 +0000 UTC m=+0.173763033 container init 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:26:25 compute-0 multipathd[235513]: + sudo -E kolla_set_configs
Nov 29 05:26:25 compute-0 podman[235498]: 2025-11-29 05:26:25.365364934 +0000 UTC m=+0.203524196 container start 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 29 05:26:25 compute-0 podman[235498]: multipathd
Nov 29 05:26:25 compute-0 systemd[1]: Started multipathd container.
Nov 29 05:26:25 compute-0 sudo[235520]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 29 05:26:25 compute-0 sudo[235520]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 05:26:25 compute-0 sudo[235520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 05:26:25 compute-0 multipathd[235513]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 05:26:25 compute-0 multipathd[235513]: INFO:__main__:Validating config file
Nov 29 05:26:25 compute-0 multipathd[235513]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 05:26:25 compute-0 multipathd[235513]: INFO:__main__:Writing out command to execute
Nov 29 05:26:25 compute-0 sudo[235520]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:25 compute-0 sudo[235456]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:25 compute-0 multipathd[235513]: ++ cat /run_command
Nov 29 05:26:25 compute-0 multipathd[235513]: + CMD='/usr/sbin/multipathd -d'
Nov 29 05:26:25 compute-0 multipathd[235513]: + ARGS=
Nov 29 05:26:25 compute-0 multipathd[235513]: + sudo kolla_copy_cacerts
Nov 29 05:26:25 compute-0 sudo[235539]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 29 05:26:25 compute-0 sudo[235539]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 05:26:25 compute-0 sudo[235539]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 05:26:25 compute-0 sudo[235539]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:25 compute-0 multipathd[235513]: + [[ ! -n '' ]]
Nov 29 05:26:25 compute-0 multipathd[235513]: + . kolla_extend_start
Nov 29 05:26:25 compute-0 multipathd[235513]: Running command: '/usr/sbin/multipathd -d'
Nov 29 05:26:25 compute-0 multipathd[235513]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 05:26:25 compute-0 multipathd[235513]: + umask 0022
Nov 29 05:26:25 compute-0 multipathd[235513]: + exec /usr/sbin/multipathd -d
Nov 29 05:26:25 compute-0 podman[235519]: 2025-11-29 05:26:25.462928315 +0000 UTC m=+0.076802315 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:26:25 compute-0 systemd[1]: 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-44f10734ebac7d5c.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 05:26:25 compute-0 systemd[1]: 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-44f10734ebac7d5c.service: Failed with result 'exit-code'.
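The failing 48e3dd...-44f10734ebac7d5c.service is the transient unit podman uses to run the container's healthcheck; exit status 1 here just records that the first run of /openstack/healthcheck returned unhealthy while multipathd was still coming up (health_status=starting, health_failing_streak=1 in the surrounding events). The check can be replayed and the recorded state read back with:

    podman healthcheck run multipathd; echo "exit=$?"
    podman inspect --format '{{.State.Health.Status}} {{.State.Health.FailingStreak}}' multipathd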
Nov 29 05:26:25 compute-0 multipathd[235513]: 3098.125926 | --------start up--------
Nov 29 05:26:25 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:26:25 compute-0 multipathd[235513]: 3098.125942 | read /etc/multipath.conf
Nov 29 05:26:25 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:26:25 compute-0 multipathd[235513]: 3098.133128 | path checkers start up
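The xtrace lines above spell out kolla_start's contract: kolla_set_configs validates /var/lib/kolla/config_files/config.json, writes its 'command' value to /run_command, and the wrapper execs it. A minimal config.json consistent with this trace (the real file mounted from multipathd.json is not shown in the log) would be:

    {
        "command": "/usr/sbin/multipathd -d",
        "config_files": [],
        "permissions": []
    }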
Nov 29 05:26:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:26 compute-0 python3.9[235702]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:26:26 compute-0 ceph-mon[75176]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:26 compute-0 sudo[235854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxxskahbvyjlnqvahjqfelndpfqzhqjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393986.469731-586-235421515633105/AnsiballZ_command.py'
Nov 29 05:26:26 compute-0 sudo[235854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:27 compute-0 python3.9[235856]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:26:27 compute-0 sudo[235854]: pam_unix(sudo:session): session closed for user root
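The restart handler uses podman's volume filter to enumerate every container that bind-mounts /etc/multipath.conf before deciding whether anything needs a bounce; standalone, the same query is:

    podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'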
Nov 29 05:26:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:27 compute-0 sudo[236019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phkpjlsidolblilzyzqkpwnhswnofxzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393987.3247683-594-870908407287/AnsiballZ_systemd.py'
Nov 29 05:26:27 compute-0 sudo[236019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:28 compute-0 python3.9[236021]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:26:28 compute-0 systemd[1]: Stopping multipathd container...
Nov 29 05:26:28 compute-0 multipathd[235513]: 3100.841460 | exit (signal)
Nov 29 05:26:28 compute-0 multipathd[235513]: 3100.841569 | --------shut down-------
Nov 29 05:26:28 compute-0 systemd[1]: libpod-48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c.scope: Deactivated successfully.
Nov 29 05:26:28 compute-0 podman[236025]: 2025-11-29 05:26:28.209671856 +0000 UTC m=+0.074040283 container died 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 05:26:28 compute-0 systemd[1]: 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-44f10734ebac7d5c.timer: Deactivated successfully.
Nov 29 05:26:28 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c.
Nov 29 05:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-userdata-shm.mount: Deactivated successfully.
Nov 29 05:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f491a639b723a357c5c1de3df2c2b97ac1b7a76a35af6d21e14ef8bb38ba88-merged.mount: Deactivated successfully.
Nov 29 05:26:28 compute-0 podman[236025]: 2025-11-29 05:26:28.48155623 +0000 UTC m=+0.345924657 container cleanup 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 05:26:28 compute-0 podman[236025]: multipathd
Nov 29 05:26:28 compute-0 podman[236052]: multipathd
Nov 29 05:26:28 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 05:26:28 compute-0 systemd[1]: Stopped multipathd container.
Nov 29 05:26:28 compute-0 systemd[1]: Starting multipathd container...
Nov 29 05:26:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f491a639b723a357c5c1de3df2c2b97ac1b7a76a35af6d21e14ef8bb38ba88/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f491a639b723a357c5c1de3df2c2b97ac1b7a76a35af6d21e14ef8bb38ba88/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:28 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c.
Nov 29 05:26:28 compute-0 podman[236065]: 2025-11-29 05:26:28.724208783 +0000 UTC m=+0.143525213 container init 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 05:26:28 compute-0 multipathd[236080]: + sudo -E kolla_set_configs
Nov 29 05:26:28 compute-0 sudo[236086]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 29 05:26:28 compute-0 sudo[236086]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 05:26:28 compute-0 sudo[236086]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 05:26:28 compute-0 podman[236065]: 2025-11-29 05:26:28.761088849 +0000 UTC m=+0.180405239 container start 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 05:26:28 compute-0 podman[236065]: multipathd
Nov 29 05:26:28 compute-0 systemd[1]: Started multipathd container.
Nov 29 05:26:28 compute-0 sudo[236019]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:28 compute-0 multipathd[236080]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 05:26:28 compute-0 multipathd[236080]: INFO:__main__:Validating config file
Nov 29 05:26:28 compute-0 multipathd[236080]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 05:26:28 compute-0 multipathd[236080]: INFO:__main__:Writing out command to execute
Nov 29 05:26:28 compute-0 sudo[236086]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:28 compute-0 multipathd[236080]: ++ cat /run_command
Nov 29 05:26:28 compute-0 multipathd[236080]: + CMD='/usr/sbin/multipathd -d'
Nov 29 05:26:28 compute-0 multipathd[236080]: + ARGS=
Nov 29 05:26:28 compute-0 multipathd[236080]: + sudo kolla_copy_cacerts
Nov 29 05:26:28 compute-0 sudo[236108]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 29 05:26:28 compute-0 sudo[236108]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 29 05:26:28 compute-0 sudo[236108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 29 05:26:28 compute-0 sudo[236108]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:28 compute-0 ceph-mon[75176]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:28 compute-0 multipathd[236080]: + [[ ! -n '' ]]
Nov 29 05:26:28 compute-0 multipathd[236080]: + . kolla_extend_start
Nov 29 05:26:28 compute-0 multipathd[236080]: Running command: '/usr/sbin/multipathd -d'
Nov 29 05:26:28 compute-0 multipathd[236080]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 05:26:28 compute-0 multipathd[236080]: + umask 0022
Nov 29 05:26:28 compute-0 multipathd[236080]: + exec /usr/sbin/multipathd -d
Nov 29 05:26:28 compute-0 podman[236087]: 2025-11-29 05:26:28.868337309 +0000 UTC m=+0.086765902 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 05:26:28 compute-0 systemd[1]: 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-2455cf12635f5daf.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 05:26:28 compute-0 systemd[1]: 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c-2455cf12635f5daf.service: Failed with result 'exit-code'.
Nov 29 05:26:28 compute-0 multipathd[236080]: 3101.537524 | --------start up--------
Nov 29 05:26:28 compute-0 multipathd[236080]: 3101.537542 | read /etc/multipath.conf
Nov 29 05:26:28 compute-0 multipathd[236080]: 3101.544582 | path checkers start up
Nov 29 05:26:28 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 05:26:28 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 05:26:29 compute-0 sudo[236272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uclcyxcqcmuygtajljoxysvyriwmntek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393989.0320385-602-280438970124601/AnsiballZ_file.py'
Nov 29 05:26:29 compute-0 sudo[236272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:29 compute-0 python3.9[236274]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:29 compute-0 sudo[236272]: pam_unix(sudo:session): session closed for user root
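Taken together, the stat, restart, and state=absent tasks above implement a marker-file restart idiom: a change elsewhere drops /etc/multipath/.multipath_restart_required, the play restarts edpm_multipathd only when the marker exists, then clears it so the restart stays idempotent. As a plain shell sketch of the same pattern:

    marker=/etc/multipath/.multipath_restart_required
    if [ -e "$marker" ]; then
        systemctl restart edpm_multipathd.service
        rm -f "$marker"
    fi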
Nov 29 05:26:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:30 compute-0 sudo[236424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umhoduzqxtohhndjiqwdqhfvgpgwcooa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393990.0660357-614-232388154758208/AnsiballZ_file.py'
Nov 29 05:26:30 compute-0 sudo[236424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:30 compute-0 python3.9[236426]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 05:26:30 compute-0 sudo[236424]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:30 compute-0 ceph-mon[75176]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:31 compute-0 sshd[190545]: drop connection #0 from [120.48.175.69]:58930 on [38.102.83.17]:22 penalty: exceeded LoginGraceTime
Nov 29 05:26:31 compute-0 sudo[236576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yghlyuuplbokbjbdepnttgjwysqmzbku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393990.8291495-622-136079975387914/AnsiballZ_modprobe.py'
Nov 29 05:26:31 compute-0 sudo[236576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:31 compute-0 python3.9[236578]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 05:26:31 compute-0 kernel: Key type psk registered
Nov 29 05:26:31 compute-0 sudo[236576]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:32 compute-0 sudo[236737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lytorugbpusurfjkibrvsiurcupwbcrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393991.6954017-630-219166139847673/AnsiballZ_stat.py'
Nov 29 05:26:32 compute-0 sudo[236737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:32 compute-0 python3.9[236739]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:26:32 compute-0 sudo[236737]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:32 compute-0 sudo[236881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgblhdtbgggejlsthpnzbkrhpsknbpau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393991.6954017-630-219166139847673/AnsiballZ_copy.py'
Nov 29 05:26:32 compute-0 sudo[236881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:32 compute-0 sudo[236842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:26:32 compute-0 sudo[236842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:32 compute-0 sudo[236842]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:32 compute-0 sudo[236888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:26:32 compute-0 sudo[236888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:32 compute-0 sudo[236888]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:32 compute-0 sudo[236913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:26:32 compute-0 sudo[236913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:32 compute-0 sudo[236913]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:32 compute-0 ceph-mon[75176]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:32 compute-0 python3.9[236885]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764393991.6954017-630-219166139847673/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:32 compute-0 sudo[236881]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:32 compute-0 sudo[236938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:26:32 compute-0 sudo[236938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:33 compute-0 sudo[236938]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:33 compute-0 sudo[237143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcsiqtgfwshkuwuvvtyguqxftjaxqtfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393993.141416-646-10880549703951/AnsiballZ_lineinfile.py'
Nov 29 05:26:33 compute-0 sudo[237143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:26:33 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:26:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:26:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:26:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:26:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:26:33 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 291e8600-3135-47de-ac76-2f4364410b03 does not exist
Nov 29 05:26:33 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev abb25670-3898-4c4e-a885-0da21437ec03 does not exist
Nov 29 05:26:33 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev b125312b-0666-45b7-8a58-733e8df32a2b does not exist
Nov 29 05:26:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:26:33 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:26:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:26:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:26:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:26:33 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:26:33 compute-0 sudo[237146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:26:33 compute-0 sudo[237146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:33 compute-0 sudo[237146]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:33 compute-0 python3.9[237145]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:33 compute-0 sudo[237143]: pam_unix(sudo:session): session closed for user root
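The modprobe task, the modules-load.d copy, and the /etc/modules line together load nvme-fabrics immediately and persist it for both the systemd and legacy boot paths (the kernel's "Key type psk registered" appears as a side effect of the module load here). The hand-run equivalent, with the service restart that the play performs further below:

    modprobe nvme-fabrics
    echo nvme-fabrics > /etc/modules-load.d/nvme-fabrics.conf
    grep -qx nvme-fabrics /etc/modules || echo nvme-fabrics >> /etc/modules
    systemctl restart systemd-modules-load.service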
Nov 29 05:26:33 compute-0 sudo[237171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:26:33 compute-0 sudo[237171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:33 compute-0 sudo[237171]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:33 compute-0 sudo[237196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:26:33 compute-0 sudo[237196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:33 compute-0 sudo[237196]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:33 compute-0 sudo[237245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:26:33 compute-0 sudo[237245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:26:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:26:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:26:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:26:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:26:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
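cephadm drives OSD creation by launching ceph-volume inside a one-shot ceph container (the short-lived goofy_carson/jolly_kare containers below); the sudo line above carries the whole call. Stripped of the deployed wrapper-script path and its --env/--timeout plumbing, its shape is:

    cephadm --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
      ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- \
      lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
      --yes --no-systemd

Here --no-auto tells ceph-volume the logical volumes are pre-created, and --no-systemd leaves unit management to cephadm itself.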
Nov 29 05:26:34 compute-0 podman[237384]: 2025-11-29 05:26:34.270357686 +0000 UTC m=+0.051509214 container create ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:26:34 compute-0 systemd[1]: Started libpod-conmon-ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794.scope.
Nov 29 05:26:34 compute-0 podman[237384]: 2025-11-29 05:26:34.24953515 +0000 UTC m=+0.030686708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:26:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:26:34 compute-0 podman[237384]: 2025-11-29 05:26:34.361109404 +0000 UTC m=+0.142260962 container init ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:26:34 compute-0 sudo[237454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjygebbjwcsvxqgrwjngqyxcxdnhkafd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393993.9690344-654-65138324627961/AnsiballZ_systemd.py'
Nov 29 05:26:34 compute-0 sudo[237454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:34 compute-0 podman[237384]: 2025-11-29 05:26:34.375242728 +0000 UTC m=+0.156394286 container start ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 29 05:26:34 compute-0 podman[237384]: 2025-11-29 05:26:34.380458765 +0000 UTC m=+0.161610323 container attach ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:26:34 compute-0 goofy_carson[237437]: 167 167
Nov 29 05:26:34 compute-0 systemd[1]: libpod-ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794.scope: Deactivated successfully.
Nov 29 05:26:34 compute-0 conmon[237437]: conmon ac970f02bf711da42730 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794.scope/container/memory.events
Nov 29 05:26:34 compute-0 podman[237384]: 2025-11-29 05:26:34.383841807 +0000 UTC m=+0.164993355 container died ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:26:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-834e37d757161bbdb7685c0a9f939bff050cc45269cfbacac8d6d2dbe0cd62f2-merged.mount: Deactivated successfully.
Nov 29 05:26:34 compute-0 podman[237384]: 2025-11-29 05:26:34.422337324 +0000 UTC m=+0.203488842 container remove ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:26:34 compute-0 systemd[1]: libpod-conmon-ac970f02bf711da427309ed16483cc73fa6aa65a264934b1c6407c602c8d6794.scope: Deactivated successfully.
Nov 29 05:26:34 compute-0 podman[237478]: 2025-11-29 05:26:34.618800933 +0000 UTC m=+0.057256424 container create d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:26:34 compute-0 systemd[1]: Started libpod-conmon-d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb.scope.
Nov 29 05:26:34 compute-0 podman[237478]: 2025-11-29 05:26:34.590569007 +0000 UTC m=+0.029024528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:26:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:34 compute-0 python3.9[237457]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:26:34 compute-0 podman[237478]: 2025-11-29 05:26:34.737179524 +0000 UTC m=+0.175635045 container init d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 05:26:34 compute-0 podman[237478]: 2025-11-29 05:26:34.745442515 +0000 UTC m=+0.183898046 container start d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:26:34 compute-0 podman[237478]: 2025-11-29 05:26:34.750339603 +0000 UTC m=+0.188795134 container attach d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:26:34 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 05:26:34 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 29 05:26:34 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 29 05:26:34 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 29 05:26:34 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 29 05:26:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:34 compute-0 sudo[237454]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:34 compute-0 ceph-mon[75176]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:35 compute-0 sudo[237658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofakpdbhcsahiykqegndgstncquyaplk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764393995.0831382-662-105261777855724/AnsiballZ_dnf.py'
Nov 29 05:26:35 compute-0 sudo[237658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:35 compute-0 python3.9[237661]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 05:26:35 compute-0 jolly_kare[237495]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:26:35 compute-0 jolly_kare[237495]: --> relative data size: 1.0
Nov 29 05:26:35 compute-0 jolly_kare[237495]: --> All data devices are unavailable
Nov 29 05:26:35 compute-0 systemd[1]: libpod-d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb.scope: Deactivated successfully.
Nov 29 05:26:35 compute-0 systemd[1]: libpod-d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb.scope: Consumed 1.055s CPU time.
Nov 29 05:26:35 compute-0 podman[237478]: 2025-11-29 05:26:35.889040485 +0000 UTC m=+1.327496006 container died d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-68d3def3471022725befd84f1180d99fabedc13baccda155b2d2c4b0b8bac7bc-merged.mount: Deactivated successfully.
Nov 29 05:26:35 compute-0 podman[237478]: 2025-11-29 05:26:35.957197633 +0000 UTC m=+1.395653114 container remove d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:26:35 compute-0 systemd[1]: libpod-conmon-d2ce958a2e64cef0bf379dd41256d2f590375614c527b29da4e257db3a8d54fb.scope: Deactivated successfully.
Nov 29 05:26:35 compute-0 sudo[237245]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:36 compute-0 sudo[237695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:26:36 compute-0 sudo[237695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:36 compute-0 sudo[237695]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:36 compute-0 sudo[237720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:26:36 compute-0 sudo[237720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:36 compute-0 sudo[237720]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:36 compute-0 sudo[237745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:26:36 compute-0 sudo[237745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:36 compute-0 sudo[237745]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:36 compute-0 sudo[237770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:26:36 compute-0 sudo[237770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:36 compute-0 podman[237836]: 2025-11-29 05:26:36.589802603 +0000 UTC m=+0.065894434 container create 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:26:36 compute-0 systemd[1]: Started libpod-conmon-5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0.scope.
Nov 29 05:26:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:26:36 compute-0 podman[237836]: 2025-11-29 05:26:36.560455329 +0000 UTC m=+0.036547210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:26:36 compute-0 podman[237836]: 2025-11-29 05:26:36.666798836 +0000 UTC m=+0.142890667 container init 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:26:36 compute-0 podman[237836]: 2025-11-29 05:26:36.674613916 +0000 UTC m=+0.150705717 container start 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:26:36 compute-0 podman[237836]: 2025-11-29 05:26:36.677486646 +0000 UTC m=+0.153578497 container attach 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:26:36 compute-0 hopeful_austin[237853]: 167 167
Nov 29 05:26:36 compute-0 systemd[1]: libpod-5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0.scope: Deactivated successfully.
Nov 29 05:26:36 compute-0 podman[237836]: 2025-11-29 05:26:36.681897994 +0000 UTC m=+0.157989795 container died 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:26:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-342d5e87cd5202bff57454d408f2078ce490b107c84f935e125d8c9d60d3ada6-merged.mount: Deactivated successfully.
Nov 29 05:26:36 compute-0 podman[237836]: 2025-11-29 05:26:36.717470469 +0000 UTC m=+0.193562270 container remove 5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:26:36 compute-0 systemd[1]: libpod-conmon-5808fd15729d51088e36e8854458aa732572ce57e74f0b7a57a09da0a7ee98b0.scope: Deactivated successfully.
Nov 29 05:26:36 compute-0 ceph-mon[75176]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:36 compute-0 podman[237876]: 2025-11-29 05:26:36.959436285 +0000 UTC m=+0.070935747 container create 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:26:37 compute-0 systemd[1]: Started libpod-conmon-91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059.scope.
Nov 29 05:26:37 compute-0 podman[237876]: 2025-11-29 05:26:36.927182871 +0000 UTC m=+0.038682343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:26:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15dd8c15f8f979bbf06965b085a5a98ad44d7ce486a55f05d60568c998277d45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15dd8c15f8f979bbf06965b085a5a98ad44d7ce486a55f05d60568c998277d45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15dd8c15f8f979bbf06965b085a5a98ad44d7ce486a55f05d60568c998277d45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15dd8c15f8f979bbf06965b085a5a98ad44d7ce486a55f05d60568c998277d45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:37 compute-0 podman[237876]: 2025-11-29 05:26:37.075054878 +0000 UTC m=+0.186554350 container init 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 05:26:37 compute-0 podman[237876]: 2025-11-29 05:26:37.096361587 +0000 UTC m=+0.207861019 container start 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 05:26:37 compute-0 podman[237876]: 2025-11-29 05:26:37.1002489 +0000 UTC m=+0.211748362 container attach 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:26:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]: {
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:     "0": [
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:         {
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "devices": [
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "/dev/loop3"
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             ],
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_name": "ceph_lv0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_size": "21470642176",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "name": "ceph_lv0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "tags": {
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.cluster_name": "ceph",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.crush_device_class": "",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.encrypted": "0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.osd_id": "0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.type": "block",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.vdo": "0"
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             },
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "type": "block",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "vg_name": "ceph_vg0"
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:         }
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:     ],
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:     "1": [
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:         {
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "devices": [
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "/dev/loop4"
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             ],
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_name": "ceph_lv1",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_size": "21470642176",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "name": "ceph_lv1",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "tags": {
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.cluster_name": "ceph",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.crush_device_class": "",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.encrypted": "0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.osd_id": "1",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.type": "block",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.vdo": "0"
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             },
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "type": "block",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "vg_name": "ceph_vg1"
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:         }
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:     ],
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:     "2": [
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:         {
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "devices": [
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "/dev/loop5"
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             ],
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_name": "ceph_lv2",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_size": "21470642176",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "name": "ceph_lv2",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "tags": {
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.cluster_name": "ceph",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.crush_device_class": "",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.encrypted": "0",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.osd_id": "2",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.type": "block",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:                 "ceph.vdo": "0"
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             },
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "type": "block",
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:             "vg_name": "ceph_vg2"
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:         }
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]:     ]
Nov 29 05:26:37 compute-0 wonderful_vaughan[237893]: }
Nov 29 05:26:37 compute-0 systemd[1]: libpod-91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059.scope: Deactivated successfully.
Nov 29 05:26:37 compute-0 podman[237876]: 2025-11-29 05:26:37.814810195 +0000 UTC m=+0.926309617 container died 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-15dd8c15f8f979bbf06965b085a5a98ad44d7ce486a55f05d60568c998277d45-merged.mount: Deactivated successfully.
Nov 29 05:26:37 compute-0 podman[237876]: 2025-11-29 05:26:37.872876657 +0000 UTC m=+0.984376079 container remove 91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:26:37 compute-0 systemd[1]: libpod-conmon-91ad1820c979e6767fac52f5471ec42f707c75efaed81b19a60f24fe1d393059.scope: Deactivated successfully.
Nov 29 05:26:37 compute-0 sudo[237770]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:37 compute-0 sudo[237918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:26:37 compute-0 sudo[237918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:37 compute-0 sudo[237918]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:38 compute-0 sudo[237943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:26:38 compute-0 sudo[237943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:38 compute-0 sudo[237943]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:38 compute-0 sudo[237968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:26:38 compute-0 sudo[237968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:38 compute-0 sudo[237968]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:38 compute-0 systemd[1]: Reloading.
Nov 29 05:26:38 compute-0 systemd-sysv-generator[238066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:26:38 compute-0 systemd-rc-local-generator[238062]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:26:38 compute-0 podman[237993]: 2025-11-29 05:26:38.317309889 +0000 UTC m=+0.154597892 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:26:38 compute-0 systemd[1]: Reloading.
Nov 29 05:26:38 compute-0 systemd-rc-local-generator[238103]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:26:38 compute-0 systemd-sysv-generator[238109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:26:38 compute-0 sudo[237996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:26:38 compute-0 sudo[237996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:38 compute-0 ceph-mon[75176]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:38 compute-0 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 05:26:39 compute-0 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 05:26:39 compute-0 lvm[238152]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 05:26:39 compute-0 lvm[238152]: VG ceph_vg0 finished
Nov 29 05:26:39 compute-0 lvm[238153]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 05:26:39 compute-0 lvm[238153]: VG ceph_vg1 finished
Nov 29 05:26:39 compute-0 lvm[238154]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 05:26:39 compute-0 lvm[238154]: VG ceph_vg2 finished
Nov 29 05:26:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 05:26:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 29 05:26:39 compute-0 systemd[1]: Reloading.
Nov 29 05:26:39 compute-0 podman[238216]: 2025-11-29 05:26:39.280490671 +0000 UTC m=+0.039985354 container create 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:26:39 compute-0 podman[238216]: 2025-11-29 05:26:39.261531439 +0000 UTC m=+0.021026172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:26:39 compute-0 systemd-sysv-generator[238264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:26:39 compute-0 systemd-rc-local-generator[238261]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:26:39 compute-0 systemd[1]: Started libpod-conmon-4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc.scope.
Nov 29 05:26:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 05:26:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:26:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:39 compute-0 podman[238216]: 2025-11-29 05:26:39.683857154 +0000 UTC m=+0.443351847 container init 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 05:26:39 compute-0 podman[238216]: 2025-11-29 05:26:39.691483989 +0000 UTC m=+0.450978682 container start 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 05:26:39 compute-0 podman[238216]: 2025-11-29 05:26:39.694763938 +0000 UTC m=+0.454258641 container attach 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:26:39 compute-0 adoring_panini[238459]: 167 167
Nov 29 05:26:39 compute-0 systemd[1]: libpod-4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc.scope: Deactivated successfully.
Nov 29 05:26:39 compute-0 podman[238216]: 2025-11-29 05:26:39.699129004 +0000 UTC m=+0.458623707 container died 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-24725f370454177cb46c91d79f97e8cd7a6d0deb245ff8c2aa64a3cdbb1618bb-merged.mount: Deactivated successfully.
Nov 29 05:26:39 compute-0 podman[238216]: 2025-11-29 05:26:39.738061665 +0000 UTC m=+0.497556348 container remove 4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:26:39 compute-0 systemd[1]: libpod-conmon-4cde8398aec709811e6ddffe1b1d9f3ba39c050403c94ab8d63ff2622cf0b3bc.scope: Deactivated successfully.
Nov 29 05:26:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:39 compute-0 podman[238713]: 2025-11-29 05:26:39.922531385 +0000 UTC m=+0.058048654 container create 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:26:39 compute-0 systemd[1]: Started libpod-conmon-1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54.scope.
Nov 29 05:26:39 compute-0 podman[238713]: 2025-11-29 05:26:39.896153877 +0000 UTC m=+0.031671236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:26:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f705f58145926cb8892125eed058e2201166d6a83d34a3359e2e1ae5c76055/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f705f58145926cb8892125eed058e2201166d6a83d34a3359e2e1ae5c76055/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f705f58145926cb8892125eed058e2201166d6a83d34a3359e2e1ae5c76055/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51f705f58145926cb8892125eed058e2201166d6a83d34a3359e2e1ae5c76055/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:26:40 compute-0 podman[238713]: 2025-11-29 05:26:40.030217228 +0000 UTC m=+0.165734497 container init 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:26:40 compute-0 podman[238713]: 2025-11-29 05:26:40.037391701 +0000 UTC m=+0.172908970 container start 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:26:40 compute-0 podman[238713]: 2025-11-29 05:26:40.045559619 +0000 UTC m=+0.181076888 container attach 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:26:40 compute-0 sudo[237658]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:40 compute-0 sudo[239591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxqlrmuhlmfjscpcbcqmwdwhrnugqtoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394000.2973924-670-216382152908304/AnsiballZ_systemd_service.py'
Nov 29 05:26:40 compute-0 sudo[239591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 05:26:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 29 05:26:40 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.629s CPU time.
Nov 29 05:26:40 compute-0 systemd[1]: run-rd2df37efd9a94adb825097f8ce549af6.service: Deactivated successfully.
Nov 29 05:26:40 compute-0 ceph-mon[75176]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:40 compute-0 python3.9[239594]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]: {
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "osd_id": 0,
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "type": "bluestore"
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:     },
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "osd_id": 1,
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "type": "bluestore"
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:     },
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "osd_id": 2,
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:         "type": "bluestore"
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]:     }
Nov 29 05:26:40 compute-0 gracious_hodgkin[238834]: }
Nov 29 05:26:40 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 29 05:26:40 compute-0 iscsid[226839]: iscsid shutting down.
Nov 29 05:26:40 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 05:26:40 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 29 05:26:40 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 05:26:40 compute-0 podman[238713]: 2025-11-29 05:26:40.981964626 +0000 UTC m=+1.117481895 container died 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:26:40 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 29 05:26:40 compute-0 systemd[1]: libpod-1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54.scope: Deactivated successfully.
Nov 29 05:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-51f705f58145926cb8892125eed058e2201166d6a83d34a3359e2e1ae5c76055-merged.mount: Deactivated successfully.
Nov 29 05:26:41 compute-0 systemd[1]: Started Open-iSCSI.
Nov 29 05:26:41 compute-0 podman[238713]: 2025-11-29 05:26:41.034423404 +0000 UTC m=+1.169940673 container remove 1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:26:41 compute-0 systemd[1]: libpod-conmon-1f94f128f1283585b8cf5276f5ab7c34bbedb7573983332fe28901e1b391cf54.scope: Deactivated successfully.
Nov 29 05:26:41 compute-0 sudo[239591]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:41 compute-0 sudo[237996]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:26:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:26:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:26:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 379ce5a1-ca6a-476f-9714-0afde0dd280e does not exist
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d05f0423-3ca4-4652-a67c-137a9896c30c does not exist
Nov 29 05:26:41 compute-0 sudo[239641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:26:41 compute-0 sudo[239641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:41 compute-0 sudo[239641]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:41 compute-0 sudo[239686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:26:41 compute-0 sudo[239686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:26:41 compute-0 sudo[239686]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:26:41
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.data']
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:26:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:41 compute-0 python3.9[239836]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 05:26:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:26:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:26:42 compute-0 ceph-mon[75176]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:42 compute-0 sudo[239992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plyomleywmzoepzdqjcfhrhizynnfnug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394002.495199-688-44241182358364/AnsiballZ_file.py'
Nov 29 05:26:42 compute-0 sudo[239992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:43 compute-0 python3.9[239994]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:43 compute-0 sudo[239992]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:44 compute-0 sudo[240144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnqiwlfjpaazwgrflpwlpzaqubgveggz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394003.6307712-699-48468941817633/AnsiballZ_systemd_service.py'
Nov 29 05:26:44 compute-0 sudo[240144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:44 compute-0 python3.9[240146]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 05:26:44 compute-0 systemd[1]: Reloading.
Nov 29 05:26:44 compute-0 systemd-sysv-generator[240177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:26:44 compute-0 systemd-rc-local-generator[240172]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:26:44 compute-0 sudo[240144]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:44 compute-0 ceph-mon[75176]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:45 compute-0 python3.9[240331]: ansible-ansible.builtin.service_facts Invoked
Nov 29 05:26:45 compute-0 network[240348]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 05:26:45 compute-0 network[240349]: 'network-scripts' will be removed from distribution in near future.
Nov 29 05:26:45 compute-0 network[240350]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 05:26:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:46 compute-0 ceph-mon[75176]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:47 compute-0 sshd-session[239964]: Invalid user terraria from 101.47.141.125 port 56908
Nov 29 05:26:47 compute-0 sshd-session[239964]: Received disconnect from 101.47.141.125 port 56908:11: Bye Bye [preauth]
Nov 29 05:26:47 compute-0 sshd-session[239964]: Disconnected from invalid user terraria 101.47.141.125 port 56908 [preauth]
Nov 29 05:26:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:48 compute-0 podman[240444]: 2025-11-29 05:26:48.707115001 +0000 UTC m=+0.097715134 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
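
The health_status=healthy event above comes from podman's periodic healthcheck, which runs the configured test ('/openstack/healthcheck', bind-mounted into the container) and records the result in container state. A hedged sketch of querying that same state out of band (container name taken from the log line; on older podman releases the state key is 'Healthcheck' rather than 'Health', hence the fallback):

    # Query the health state that podman logs above via `podman inspect`.
    import json
    import subprocess

    def health_status(container: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", container],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("ovn_metadata_agent"))
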
Nov 29 05:26:48 compute-0 ceph-mon[75176]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:50 compute-0 sudo[240642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgionuekvphbwyefyfoepdxrhzouxxpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394009.7778914-718-48708159005220/AnsiballZ_systemd_service.py'
Nov 29 05:26:50 compute-0 sudo[240642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:50 compute-0 python3.9[240644]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:50 compute-0 sudo[240642]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:50 compute-0 sshd[190545]: drop connection #0 from [120.48.175.69]:34704 on [38.102.83.17]:22 penalty: exceeded LoginGraceTime
Nov 29 05:26:50 compute-0 ceph-mon[75176]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:50 compute-0 sudo[240795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsoetlwwbtnrlyjcxtnimtstinbsdrwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394010.5852265-718-31527744311513/AnsiballZ_systemd_service.py'
Nov 29 05:26:50 compute-0 sudo[240795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:51 compute-0 python3.9[240797]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:51 compute-0 sudo[240795]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
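
Each pg_autoscaler line above is internally consistent: the pg target is the pool's capacity ratio times its bias times the cluster PG budget, and the logged values imply a budget of 300 (consistent with 3 OSDs at the default mon_target_pg_per_osd=100; this is an inference from the numbers, not read from cluster config). A short check against the logged figures:

    # Reproduce the pg_autoscaler targets from the ratios and biases logged above.
    # PG budget of 300 is inferred: 3 OSDs x mon_target_pg_per_osd=100 (default).
    PG_BUDGET = 3 * 100

    pools = {  # pool: (capacity_ratio, bias) copied from the log lines
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }

    for pool, (ratio, bias) in pools.items():
        print(f"{pool}: pg target {ratio * bias * PG_BUDGET}")

Running this reproduces the targets in the log up to float rounding (e.g. 0.0021557249951162337 for '.mgr' and 0.0006104707950771635 for 'cephfs.cephfs.meta'), before the autoscaler quantizes each target to a power-of-two PG count.
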
Nov 29 05:26:51 compute-0 sudo[240948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnujnbeufgqhuplcxhokfdmvlilxfwqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394011.2921937-718-150530326898270/AnsiballZ_systemd_service.py'
Nov 29 05:26:51 compute-0 sudo[240948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:51 compute-0 python3.9[240950]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:51 compute-0 sudo[240948]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:52 compute-0 sudo[241101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iawjgoonpykuqywlvhoazlgfrstzxmsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394012.0536811-718-260975454188092/AnsiballZ_systemd_service.py'
Nov 29 05:26:52 compute-0 sudo[241101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:52 compute-0 python3.9[241103]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:52 compute-0 sudo[241101]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:52 compute-0 ceph-mon[75176]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:53 compute-0 sudo[241254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuotairrxubpmimkgbllfqqesbdmfysa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394012.7984629-718-259361943877133/AnsiballZ_systemd_service.py'
Nov 29 05:26:53 compute-0 sudo[241254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:53 compute-0 python3.9[241256]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:53 compute-0 sudo[241254]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:53 compute-0 sudo[241407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szvwohqnninehwdxnqquyhzgafberpfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394013.617855-718-92137377905199/AnsiballZ_systemd_service.py'
Nov 29 05:26:53 compute-0 sudo[241407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:54 compute-0 python3.9[241409]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:54 compute-0 sudo[241407]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:54 compute-0 ceph-mon[75176]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:26:54 compute-0 sudo[241560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imdpnowvssncoukhykjwillmgdbjcwci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394014.4509113-718-154869468540533/AnsiballZ_systemd_service.py'
Nov 29 05:26:54 compute-0 sudo[241560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:55 compute-0 python3.9[241562]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:55 compute-0 sudo[241560]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:55 compute-0 sudo[241713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fepvrbqjzcdwveawfluqemwfnwzhoxfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394015.407239-718-53636747881166/AnsiballZ_systemd_service.py'
Nov 29 05:26:55 compute-0 sudo[241713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:56 compute-0 python3.9[241715]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:26:56 compute-0 sudo[241713]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:56 compute-0 sudo[241866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvgsdilrvnidedarhdjfngjwqkhjbvyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394016.4484863-777-17904169422291/AnsiballZ_file.py'
Nov 29 05:26:56 compute-0 sudo[241866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:57 compute-0 ceph-mon[75176]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:57 compute-0 python3.9[241868]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:57 compute-0 sudo[241866]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:58 compute-0 sudo[242018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewkaeplfxbcdjkinomjoiheksurzgnwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394017.8679142-777-93721074343499/AnsiballZ_file.py'
Nov 29 05:26:58 compute-0 sudo[242018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:58 compute-0 python3.9[242020]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:58 compute-0 sudo[242018]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:58 compute-0 ceph-mon[75176]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:58 compute-0 sudo[242170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xixqonpukgciseggeovpueyfphkwaddv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394018.4700823-777-113542775197049/AnsiballZ_file.py'
Nov 29 05:26:58 compute-0 sudo[242170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:58 compute-0 python3.9[242172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:58 compute-0 sudo[242170]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:59 compute-0 podman[242173]: 2025-11-29 05:26:59.013492826 +0000 UTC m=+0.068524878 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 29 05:26:59 compute-0 sudo[242342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyekmvldgpcyprmnhcqxeyxkgkkrfvfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394019.0919883-777-233728012270002/AnsiballZ_file.py'
Nov 29 05:26:59 compute-0 sudo[242342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:26:59 compute-0 python3.9[242344]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:26:59 compute-0 sudo[242342]: pam_unix(sudo:session): session closed for user root
Nov 29 05:26:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:26:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:00 compute-0 sudo[242494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paiqgfmroqcbeddtutyulnmgtqwfjfwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394019.7563486-777-37751678651274/AnsiballZ_file.py'
Nov 29 05:27:00 compute-0 sudo[242494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:00 compute-0 python3.9[242496]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:00 compute-0 sudo[242494]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:00 compute-0 sudo[242646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpbmccwrjdnxudoqwvkdkngfmhwuujxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394020.3357177-777-138376022850452/AnsiballZ_file.py'
Nov 29 05:27:00 compute-0 sudo[242646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:00 compute-0 ceph-mon[75176]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:00 compute-0 python3.9[242648]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:00 compute-0 sudo[242646]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:01 compute-0 anacron[34133]: Job `cron.daily' started
Nov 29 05:27:01 compute-0 anacron[34133]: Job `cron.daily' terminated
Nov 29 05:27:01 compute-0 sudo[242800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imoqhzfqzznsffrjazztynbhihmxvmrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394020.9631941-777-249457701866693/AnsiballZ_file.py'
Nov 29 05:27:01 compute-0 sudo[242800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:01 compute-0 python3.9[242802]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:01 compute-0 sudo[242800]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:02 compute-0 sudo[242952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-claczaqnkwpjvbqtgzfrvkzzaruawozc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394021.6624048-777-168781516649100/AnsiballZ_file.py'
Nov 29 05:27:02 compute-0 sudo[242952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:02 compute-0 python3.9[242954]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:02 compute-0 sudo[242952]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:02 compute-0 ceph-mon[75176]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:02 compute-0 sudo[243104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dptwaidqxqnkloxyyxdmcroippggdimx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394022.4644074-834-240198954978211/AnsiballZ_file.py'
Nov 29 05:27:02 compute-0 sudo[243104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:02 compute-0 python3.9[243106]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:03 compute-0 sudo[243104]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:03 compute-0 sudo[243256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sogstztmiytdgnnewphldvosmzxdvqpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394023.1725883-834-218401075092880/AnsiballZ_file.py'
Nov 29 05:27:03 compute-0 sudo[243256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:03 compute-0 python3.9[243258]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:03 compute-0 sudo[243256]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:04 compute-0 sudo[243408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkbvlcrhssbxkfariksuselaufbkycfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394024.0597289-834-124269050608846/AnsiballZ_file.py'
Nov 29 05:27:04 compute-0 sudo[243408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:04 compute-0 python3.9[243410]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:04 compute-0 sudo[243408]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:04 compute-0 ceph-mon[75176]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:05 compute-0 sudo[243560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afthakzbqccqqanglaehvsbkkzqerfpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394024.8112738-834-163964424244705/AnsiballZ_file.py'
Nov 29 05:27:05 compute-0 sudo[243560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:05 compute-0 python3.9[243562]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:05 compute-0 sudo[243560]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:05 compute-0 sudo[243712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpfdwctaergmzkezngmwreuffjbhpolz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394025.4540288-834-48928459853365/AnsiballZ_file.py'
Nov 29 05:27:05 compute-0 sudo[243712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:05 compute-0 python3.9[243714]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:05 compute-0 sudo[243712]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:06 compute-0 sudo[243864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pykqilpcvdtrcczptoxmyrvbgcparxqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394026.1483395-834-79886366013235/AnsiballZ_file.py'
Nov 29 05:27:06 compute-0 sudo[243864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:06 compute-0 python3.9[243866]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:06 compute-0 sudo[243864]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:06 compute-0 ceph-mon[75176]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:07 compute-0 sudo[244016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riyzmzdngwiwqdpxjxedekklswsjleth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394026.8443658-834-260182087639468/AnsiballZ_file.py'
Nov 29 05:27:07 compute-0 sudo[244016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:07 compute-0 python3.9[244018]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:07 compute-0 sudo[244016]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:07 compute-0 sudo[244169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxrorwybobnhoshtfccxcteeryxjamnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394027.4669702-834-213573923755761/AnsiballZ_file.py'
Nov 29 05:27:07 compute-0 sudo[244169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:08 compute-0 python3.9[244171]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:08 compute-0 sudo[244169]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:08 compute-0 sudo[244339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxeanedsdbgaddyajwzhdqtihngjymmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394028.3951898-892-92087448611136/AnsiballZ_command.py'
Nov 29 05:27:08 compute-0 sudo[244339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:08 compute-0 ceph-mon[75176]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:08 compute-0 podman[244295]: 2025-11-29 05:27:08.796452517 +0000 UTC m=+0.099260770 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 05:27:08 compute-0 python3.9[244346]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:27:09 compute-0 sudo[244339]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:10 compute-0 python3.9[244502]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 05:27:10 compute-0 sshd[190545]: drop connection #0 from [120.48.175.69]:38524 on [38.102.83.17]:22 penalty: exceeded LoginGraceTime
Nov 29 05:27:10 compute-0 sudo[244652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaqtfstofdgyfbyoixelixqykgrgzclk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394030.2814627-910-270996753103502/AnsiballZ_systemd_service.py'
Nov 29 05:27:10 compute-0 sudo[244652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:10 compute-0 ceph-mon[75176]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:10 compute-0 python3.9[244654]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 05:27:10 compute-0 systemd[1]: Reloading.
Nov 29 05:27:11 compute-0 systemd-rc-local-generator[244684]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:27:11 compute-0 systemd-sysv-generator[244688]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:27:11 compute-0 sudo[244652]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:27:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:27:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:27:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:27:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:27:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:27:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:11 compute-0 sudo[244839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksqidvdpjwboscgzuwwmeveobzwjheiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394031.5529888-918-164995810505500/AnsiballZ_command.py'
Nov 29 05:27:11 compute-0 sudo[244839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:12 compute-0 python3.9[244841]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:27:12 compute-0 sudo[244839]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:12 compute-0 sudo[244992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uanmzorwwljlbrjbnnyspskdqaverrbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394032.3757527-918-113380436701451/AnsiballZ_command.py'
Nov 29 05:27:12 compute-0 sudo[244992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:12 compute-0 ceph-mon[75176]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:12 compute-0 python3.9[244994]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:27:13 compute-0 sudo[244992]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:13 compute-0 sudo[245145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsriwhgxhqkxlxjbhbhewlhqetkxuhfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394033.2198882-918-76676767826754/AnsiballZ_command.py'
Nov 29 05:27:13 compute-0 sudo[245145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:27:13.737 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:27:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:27:13.739 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:27:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:27:13.739 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:27:13 compute-0 python3.9[245147]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:27:13 compute-0 sudo[245145]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:14 compute-0 sudo[245298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbxxlqvoqfanmqbroiylynuprcxonvuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394034.0556078-918-190046192106003/AnsiballZ_command.py'
Nov 29 05:27:14 compute-0 sudo[245298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:14 compute-0 python3.9[245300]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:27:14 compute-0 sudo[245298]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:14 compute-0 ceph-mon[75176]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:15 compute-0 sudo[245451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wutuunsuewsiysyyonfyxgvqkgxpdxqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394034.7724223-918-130660741444376/AnsiballZ_command.py'
Nov 29 05:27:15 compute-0 sudo[245451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:15 compute-0 python3.9[245453]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:27:15 compute-0 sudo[245451]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:15 compute-0 sudo[245604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfbxtcpauheosissqijpsftjduvcirij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394035.4128542-918-15650017423773/AnsiballZ_command.py'
Nov 29 05:27:15 compute-0 sudo[245604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:15 compute-0 python3.9[245606]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:27:15 compute-0 sudo[245604]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:16 compute-0 sudo[245757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dikpokthjblssgidggphtrlttwwvjxom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394036.0742197-918-63201122198072/AnsiballZ_command.py'
Nov 29 05:27:16 compute-0 sudo[245757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:16 compute-0 python3.9[245759]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:27:16 compute-0 sudo[245757]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:16 compute-0 ceph-mon[75176]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:17 compute-0 sudo[245910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqkbybceidbokgppuuvhoevfobrszhwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394036.887935-918-139574918424398/AnsiballZ_command.py'
Nov 29 05:27:17 compute-0 sudo[245910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:17 compute-0 python3.9[245912]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 05:27:17 compute-0 sudo[245910]: pam_unix(sudo:session): session closed for user root
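The five ansible.legacy.command tasks above clear systemd's failed-unit state for the retired TripleO nova services, one unit per become/sudo round trip. A minimal sketch of the equivalent loop, assuming only the unit names visible in this log:

    # Reset systemd's "failed" marker for the TripleO nova units seen above.
    # `systemctl reset-failed <unit>` is the standard systemd CLI; the unit
    # list is copied from the log, the loop itself is illustrative.
    import subprocess

    UNITS = [
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    for unit in UNITS:
        # check=False: reset-failed exits non-zero for unknown units,
        # which is tolerable during cleanup.
        subprocess.run(["systemctl", "reset-failed", unit], check=False)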
Nov 29 05:27:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:18 compute-0 ceph-mon[75176]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:18 compute-0 sudo[246075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djeibtphgafcuwooxjenmyakzjnwhrgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394038.5545607-997-129812540403514/AnsiballZ_file.py'
Nov 29 05:27:18 compute-0 sudo[246075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:18 compute-0 podman[246037]: 2025-11-29 05:27:18.95625304 +0000 UTC m=+0.098260387 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 05:27:19 compute-0 python3.9[246083]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:19 compute-0 sudo[246075]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:19 compute-0 sudo[246234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txoocrmjjzcpyddkogyucnmuxpxzrjwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394039.3829837-997-131485449738487/AnsiballZ_file.py'
Nov 29 05:27:19 compute-0 sudo[246234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:20 compute-0 python3.9[246236]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:20 compute-0 sudo[246234]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:20 compute-0 sudo[246386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlhzdkmgbohqwkhhjlxioidzkfpgcxwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394040.1979933-997-105151622617426/AnsiballZ_file.py'
Nov 29 05:27:20 compute-0 sudo[246386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:20 compute-0 python3.9[246388]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:20 compute-0 sudo[246386]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:20 compute-0 ceph-mon[75176]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:21 compute-0 sudo[246538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqdwjmvdhqhthoxzfmokveidjildyyvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394040.8604684-1019-67569842414433/AnsiballZ_file.py'
Nov 29 05:27:21 compute-0 sudo[246538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:21 compute-0 python3.9[246540]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:21 compute-0 sudo[246538]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:21 compute-0 sudo[246690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yknvozqfkgnqkoohyrpmtjrmrdqqukry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394041.6192148-1019-123068646002271/AnsiballZ_file.py'
Nov 29 05:27:21 compute-0 sudo[246690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:22 compute-0 python3.9[246692]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:22 compute-0 sudo[246690]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:22 compute-0 sudo[246842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypefgpycrpbofsdxxsgiehmytggvdwez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394042.371608-1019-277290539188682/AnsiballZ_file.py'
Nov 29 05:27:22 compute-0 sudo[246842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:22 compute-0 ceph-mon[75176]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:22 compute-0 python3.9[246844]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:22 compute-0 sudo[246842]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:23 compute-0 sudo[246996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmngtokveoblpyyulenovrcchefripvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394043.1011336-1019-88317070404292/AnsiballZ_file.py'
Nov 29 05:27:23 compute-0 sudo[246996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:23 compute-0 python3.9[246998]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:23 compute-0 sudo[246996]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:24 compute-0 sshd-session[246944]: Received disconnect from 193.46.255.99 port 39834:11:  [preauth]
Nov 29 05:27:24 compute-0 sshd-session[246944]: Disconnected from authenticating user root 193.46.255.99 port 39834 [preauth]
Nov 29 05:27:24 compute-0 sudo[247150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akqwukqovewbwmhvyhhrczkjwyrzxbpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394044.0976229-1019-45354226584484/AnsiballZ_file.py'
Nov 29 05:27:24 compute-0 sudo[247150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:24 compute-0 python3.9[247152]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:24 compute-0 sudo[247150]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:24 compute-0 ceph-mon[75176]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:25 compute-0 sudo[247302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxbkhccbmfkqtvxqphupznjapacsrzlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394044.810597-1019-235780054386785/AnsiballZ_file.py'
Nov 29 05:27:25 compute-0 sudo[247302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:25 compute-0 sshd-session[247115]: Invalid user khan from 152.32.145.111 port 51630
Nov 29 05:27:25 compute-0 sshd-session[247115]: Received disconnect from 152.32.145.111 port 51630:11: Bye Bye [preauth]
Nov 29 05:27:25 compute-0 sshd-session[247115]: Disconnected from invalid user khan 152.32.145.111 port 51630 [preauth]
Nov 29 05:27:25 compute-0 python3.9[247304]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:25 compute-0 sudo[247302]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:25 compute-0 sudo[247454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bljeeiusfftvdwmzsyxymrvamwwtbzez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394045.5554857-1019-67266422894162/AnsiballZ_file.py'
Nov 29 05:27:25 compute-0 sudo[247454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:26 compute-0 python3.9[247456]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:26 compute-0 sudo[247454]: pam_unix(sudo:session): session closed for user root
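The ansible.builtin.file tasks interleaved above all follow one recipe: create a directory, hand it to zuul (or root for /etc/ceph), and label it container_file_t so the path can later be bind-mounted into podman containers. A sketch of that recipe, assuming the paths and ownership shown in the log; the helper name is invented for illustration:

    # What each ansible.builtin.file task above amounts to: mkdir -p,
    # chown/chmod, then apply the SELinux type set via setype=.
    import grp
    import os
    import pwd
    import subprocess

    def container_dir(path, user="zuul", group="zuul", mode=0o755):
        os.makedirs(path, exist_ok=True)
        os.chown(path, pwd.getpwnam(user).pw_uid, grp.getgrnam(group).gr_gid)
        os.chmod(path, mode)
        # chcon mirrors setype=container_file_t from the tasks above
        subprocess.run(["chcon", "-t", "container_file_t", path], check=True)

    for p in ("/var/lib/openstack/config/nova",
              "/var/lib/nova/instances",
              "/etc/multipath"):
        container_dir(p)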
Nov 29 05:27:26 compute-0 ceph-mon[75176]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:28 compute-0 ceph-mon[75176]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:29 compute-0 sshd[190545]: drop connection #1 from [120.48.175.69]:42496 on [38.102.83.17]:22 penalty: exceeded LoginGraceTime
Nov 29 05:27:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:30 compute-0 podman[247483]: 2025-11-29 05:27:30.049635741 +0000 UTC m=+0.101696970 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
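The recurring health_status entries (ovn_metadata_agent, multipathd, ovn_controller) come from podman running the healthcheck declared in each container's config_data ('test': '/openstack/healthcheck'). The same check can be driven by hand; the container name is taken from the entry above, the rest is a sketch:

    # Run one healthcheck pass for the multipathd container shown above.
    # `podman healthcheck run <name>` exits 0 when the check passes.
    import subprocess

    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0 else "unhealthy")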
Nov 29 05:27:30 compute-0 sshd-session[247481]: Received disconnect from 45.120.216.232 port 44310:11: Bye Bye [preauth]
Nov 29 05:27:30 compute-0 sshd-session[247481]: Disconnected from authenticating user root 45.120.216.232 port 44310 [preauth]
Nov 29 05:27:30 compute-0 ceph-mon[75176]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:31 compute-0 sudo[247625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rguvamfvvxiwgstbszuyqlnacqnputia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394051.1968832-1208-251083901209802/AnsiballZ_getent.py'
Nov 29 05:27:31 compute-0 sudo[247625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:31 compute-0 python3.9[247627]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 05:27:31 compute-0 sudo[247625]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:32 compute-0 sudo[247778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afjfawlcacijnxbnklzokfuimlvroftq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394052.1905293-1216-138883737409904/AnsiballZ_group.py'
Nov 29 05:27:32 compute-0 sudo[247778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:32 compute-0 python3.9[247780]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 05:27:32 compute-0 groupadd[247781]: group added to /etc/group: name=nova, GID=42436
Nov 29 05:27:32 compute-0 ceph-mon[75176]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:32 compute-0 groupadd[247781]: group added to /etc/gshadow: name=nova
Nov 29 05:27:32 compute-0 groupadd[247781]: new group: name=nova, GID=42436
Nov 29 05:27:32 compute-0 sudo[247778]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:33 compute-0 sudo[247936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbipyvtlgcebljffaaaizrephjsuutvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394053.19111-1224-118935886069261/AnsiballZ_user.py'
Nov 29 05:27:33 compute-0 sudo[247936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:33 compute-0 python3.9[247938]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 05:27:34 compute-0 useradd[247940]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 29 05:27:34 compute-0 useradd[247940]: add 'nova' to group 'libvirt'
Nov 29 05:27:34 compute-0 useradd[247940]: add 'nova' to shadow group 'libvirt'
Nov 29 05:27:34 compute-0 sudo[247936]: pam_unix(sudo:session): session closed for user root
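The getent/group/user tasks above reduce to the two shadow-utils calls that groupadd[247781] and useradd[247940] record: create group nova with GID 42436, then user nova with matching UID, a /bin/sh shell, and supplementary membership in libvirt. A sketch using those values from the log; treating failure as fatal is an assumption:

    # Recreate the nova account provisioned above. groupadd/useradd are the
    # same tools the ansible modules drive, per the log lines they emit.
    import subprocess

    subprocess.run(["groupadd", "-g", "42436", "nova"], check=True)
    subprocess.run(
        ["useradd", "-u", "42436", "-g", "nova", "-G", "libvirt",
         "-s", "/bin/sh", "-c", "nova user", "nova"],
        check=True,
    )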
Nov 29 05:27:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.835877) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054835934, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1252, "num_deletes": 505, "total_data_size": 1470240, "memory_usage": 1499984, "flush_reason": "Manual Compaction"}
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054853014, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1456348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13575, "largest_seqno": 14826, "table_properties": {"data_size": 1450796, "index_size": 2500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14069, "raw_average_key_size": 17, "raw_value_size": 1437755, "raw_average_value_size": 1826, "num_data_blocks": 114, "num_entries": 787, "num_filter_entries": 787, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764393955, "oldest_key_time": 1764393955, "file_creation_time": 1764394054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 17189 microseconds, and 7705 cpu microseconds.
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.853072) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1456348 bytes OK
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.853099) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.855126) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.855171) EVENT_LOG_v1 {"time_micros": 1764394054855161, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.855194) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1463515, prev total WAL file size 1463515, number of live WAL files 2.
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.855924) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1422KB)], [32(7468KB)]
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054855991, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9104244, "oldest_snapshot_seqno": -1}
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3783 keys, 7161207 bytes, temperature: kUnknown
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054918098, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7161207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7134134, "index_size": 16531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 92688, "raw_average_key_size": 24, "raw_value_size": 7063844, "raw_average_value_size": 1867, "num_data_blocks": 701, "num_entries": 3783, "num_filter_entries": 3783, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:27:34 compute-0 ceph-mon[75176]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.918319) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7161207 bytes
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.919806) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.5 rd, 115.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.3 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 4806, records dropped: 1023 output_compression: NoCompression
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.919837) EVENT_LOG_v1 {"time_micros": 1764394054919813, "job": 14, "event": "compaction_finished", "compaction_time_micros": 62156, "compaction_time_cpu_micros": 33145, "output_level": 6, "num_output_files": 1, "total_output_size": 7161207, "num_input_records": 4806, "num_output_records": 3783, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054920164, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394054922490, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.855789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.922648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.922653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.922654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.922656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:27:34 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:27:34.922657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
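The rocksdb burst above is one flush-and-compact cycle on the monitor store: JOB 13 flushes the memtable to L0 table #34 (1456348 bytes), then JOB 14 merges it with the existing L6 file #32 into the new table #35 and deletes both inputs. The amplification figures it prints can be checked directly from the byte counts in the event log:

    # Reproduce JOB 14's amplification numbers from the sizes logged above.
    new_input = 1_456_348       # table #34, the freshly flushed L0 file
    total_input = 9_104_244     # input_data_size: L0 #34 plus L6 #32
    output = 7_161_207          # the new L6 table #35
    write_amplify = output / new_input                        # ~4.9
    read_write_amplify = (total_input + output) / new_input   # ~11.2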
Nov 29 05:27:34 compute-0 sshd-session[247971]: Accepted publickey for zuul from 192.168.122.30 port 52670 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:27:34 compute-0 systemd-logind[793]: New session 50 of user zuul.
Nov 29 05:27:34 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 29 05:27:35 compute-0 sshd-session[247971]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:27:35 compute-0 sshd-session[247974]: Received disconnect from 192.168.122.30 port 52670:11: disconnected by user
Nov 29 05:27:35 compute-0 sshd-session[247974]: Disconnected from user zuul 192.168.122.30 port 52670
Nov 29 05:27:35 compute-0 sshd-session[247971]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:27:35 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 05:27:35 compute-0 systemd-logind[793]: Session 50 logged out. Waiting for processes to exit.
Nov 29 05:27:35 compute-0 systemd-logind[793]: Removed session 50.
Nov 29 05:27:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:35 compute-0 python3.9[248124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:27:36 compute-0 python3.9[248245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394055.3405144-1249-256034503309412/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:36 compute-0 ceph-mon[75176]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:37 compute-0 python3.9[248395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:27:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:37 compute-0 python3.9[248471]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
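Each config file lands through the same stat-then-copy pattern: ansible.legacy.stat checksums the destination (sha1), and the follow-up is ansible.legacy.copy when the content differs (config.json above) but only ansible.legacy.file, a permissions touch-up, when it already matches (nova-blank.conf above). A sketch of the idea, with an invented helper name:

    # Transfer a file only when its sha1 differs, mirroring the
    # stat/copy/file sequence in the surrounding log entries.
    import hashlib
    import os
    import shutil

    def sha1(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(src, dest):
        if not os.path.exists(dest) or sha1(dest) != sha1(src):
            shutil.copy2(src, dest)   # the ansible.legacy.copy case
            return True
        return False                  # the ansible.legacy.file case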
Nov 29 05:27:38 compute-0 python3.9[248621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:27:38 compute-0 ceph-mon[75176]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:39 compute-0 podman[248692]: 2025-11-29 05:27:39.06065182 +0000 UTC m=+0.109751994 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 05:27:39 compute-0 python3.9[248768]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394058.057398-1249-116968742759637/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:40 compute-0 python3.9[248918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:27:40 compute-0 python3.9[249039]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394059.4375298-1249-224288687081221/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:40 compute-0 ceph-mon[75176]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:41 compute-0 sudo[249139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:27:41 compute-0 sudo[249139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:41 compute-0 sudo[249139]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:27:41
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.log', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr']
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:27:41 compute-0 sudo[249182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:27:41 compute-0 sudo[249182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:41 compute-0 sudo[249182]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:27:41 compute-0 sudo[249224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:27:41 compute-0 sudo[249224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:41 compute-0 sudo[249224]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:27:41 compute-0 sudo[249265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:27:41 compute-0 sudo[249265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:27:41 compute-0 python3.9[249256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:27:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:41 compute-0 sudo[249265]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:27:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:27:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:27:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:27:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:27:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:27:42 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev c1e97b11-93db-4f0b-b7e5-a78e881454c1 does not exist
Nov 29 05:27:42 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev f405689a-a342-4382-bba0-4224ba2e4503 does not exist
Nov 29 05:27:42 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 85ce1be6-5335-4867-a5e8-3b03b25fc6a8 does not exist
Nov 29 05:27:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:27:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:27:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:27:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:27:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:27:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:27:42 compute-0 sudo[249443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:27:42 compute-0 sudo[249443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:42 compute-0 sudo[249443]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:42 compute-0 python3.9[249442]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394061.0879936-1249-252044331660310/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:42 compute-0 sudo[249468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:27:42 compute-0 sudo[249468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:42 compute-0 sudo[249468]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:42 compute-0 sudo[249493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:27:42 compute-0 sudo[249493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:42 compute-0 sudo[249493]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:42 compute-0 sudo[249542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:27:42 compute-0 sudo[249542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:42 compute-0 podman[249698]: 2025-11-29 05:27:42.63004865 +0000 UTC m=+0.066399147 container create 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:27:42 compute-0 systemd[1]: Started libpod-conmon-5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72.scope.
Nov 29 05:27:42 compute-0 podman[249698]: 2025-11-29 05:27:42.601223063 +0000 UTC m=+0.037573620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:27:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:27:42 compute-0 podman[249698]: 2025-11-29 05:27:42.724034412 +0000 UTC m=+0.160384919 container init 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:27:42 compute-0 podman[249698]: 2025-11-29 05:27:42.732025165 +0000 UTC m=+0.168375642 container start 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:27:42 compute-0 podman[249698]: 2025-11-29 05:27:42.735508619 +0000 UTC m=+0.171859096 container attach 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:27:42 compute-0 objective_germain[249748]: 167 167
Nov 29 05:27:42 compute-0 podman[249698]: 2025-11-29 05:27:42.738554892 +0000 UTC m=+0.174905359 container died 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:27:42 compute-0 systemd[1]: libpod-5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72.scope: Deactivated successfully.
Nov 29 05:27:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bfa14f56164b59930fb2e5730afa5a7d9bb7b24e99fb8f4facf52591b966bcf-merged.mount: Deactivated successfully.
Nov 29 05:27:42 compute-0 podman[249698]: 2025-11-29 05:27:42.779853101 +0000 UTC m=+0.216203558 container remove 5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 05:27:42 compute-0 systemd[1]: libpod-conmon-5134313426eafdc852d17c9ec52a84621502c463ab307eb989dcde3b00ab5b72.scope: Deactivated successfully.
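The short-lived objective_germain container above is cephadm probing the ceph image (the "167 167" it prints is likely the ceph uid/gid pair); exciting_rubin, started next, carries the real work from the sudo[249542] line. Stripped of the cephadm wrapper (which supplies the keyring and conf via --config-json -), the operation inside the container is the ceph-volume call already visible in that sudo entry:

    ceph-volume lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 \
        /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd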
Nov 29 05:27:42 compute-0 python3.9[249745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:27:42 compute-0 podman[249778]: 2025-11-29 05:27:42.949726788 +0000 UTC m=+0.043479152 container create ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:27:42 compute-0 ceph-mon[75176]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:27:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:27:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:27:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:27:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:27:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:27:42 compute-0 systemd[1]: Started libpod-conmon-ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21.scope.
Nov 29 05:27:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:43 compute-0 podman[249778]: 2025-11-29 05:27:42.935668618 +0000 UTC m=+0.029421012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:27:43 compute-0 podman[249778]: 2025-11-29 05:27:43.055024923 +0000 UTC m=+0.148777297 container init ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:27:43 compute-0 podman[249778]: 2025-11-29 05:27:43.06649342 +0000 UTC m=+0.160245784 container start ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:27:43 compute-0 podman[249778]: 2025-11-29 05:27:43.069967695 +0000 UTC m=+0.163720119 container attach ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:27:43 compute-0 python3.9[249913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394062.3033223-1249-56338350063723/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:44 compute-0 sudo[250082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmjdwnlderkbutegcbpnzktshekxqpvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394063.6982858-1332-134102669779908/AnsiballZ_file.py'
Nov 29 05:27:44 compute-0 sudo[250082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:44 compute-0 exciting_rubin[249834]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:27:44 compute-0 exciting_rubin[249834]: --> relative data size: 1.0
Nov 29 05:27:44 compute-0 exciting_rubin[249834]: --> All data devices are unavailable
Nov 29 05:27:44 compute-0 systemd[1]: libpod-ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21.scope: Deactivated successfully.
Nov 29 05:27:44 compute-0 systemd[1]: libpod-ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21.scope: Consumed 1.079s CPU time.
Nov 29 05:27:44 compute-0 podman[249778]: 2025-11-29 05:27:44.211702145 +0000 UTC m=+1.305454549 container died ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:27:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb0d1114e412c4f8799e50aef7d804078a02eecc941534f4cb01f9b4f8b488ac-merged.mount: Deactivated successfully.
Nov 29 05:27:44 compute-0 podman[249778]: 2025-11-29 05:27:44.289071557 +0000 UTC m=+1.382823931 container remove ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_rubin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:27:44 compute-0 systemd[1]: libpod-conmon-ee6b8c75f12c738eb0eec52a2bc08adef530600e22fa8ee804f14a534a293e21.scope: Deactivated successfully.
Nov 29 05:27:44 compute-0 python3.9[250085]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:44 compute-0 sudo[249542]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:44 compute-0 sudo[250082]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:44 compute-0 sudo[250104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:27:44 compute-0 sudo[250104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:44 compute-0 sudo[250104]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:44 compute-0 sudo[250153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:27:44 compute-0 sudo[250153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:44 compute-0 sudo[250153]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:44 compute-0 sudo[250178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:27:44 compute-0 sudo[250178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:44 compute-0 sudo[250178]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:44 compute-0 sudo[250227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:27:44 compute-0 sudo[250227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:44 compute-0 sudo[250388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umlxvscqdenxmzjmtkyddstfjtpduvrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394064.5480156-1340-111720876111547/AnsiballZ_copy.py'
Nov 29 05:27:44 compute-0 sudo[250388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:44 compute-0 ceph-mon[75176]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:45 compute-0 podman[250397]: 2025-11-29 05:27:45.012032564 +0000 UTC m=+0.042848138 container create 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:27:45 compute-0 systemd[1]: Started libpod-conmon-8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c.scope.
Nov 29 05:27:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:27:45 compute-0 podman[250397]: 2025-11-29 05:27:45.089188719 +0000 UTC m=+0.120004373 container init 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 05:27:45 compute-0 podman[250397]: 2025-11-29 05:27:44.995524414 +0000 UTC m=+0.026340008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:27:45 compute-0 podman[250397]: 2025-11-29 05:27:45.095383818 +0000 UTC m=+0.126199392 container start 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:27:45 compute-0 podman[250397]: 2025-11-29 05:27:45.098057553 +0000 UTC m=+0.128873217 container attach 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:27:45 compute-0 eager_bouman[250413]: 167 167
Nov 29 05:27:45 compute-0 systemd[1]: libpod-8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c.scope: Deactivated successfully.
Nov 29 05:27:45 compute-0 podman[250397]: 2025-11-29 05:27:45.100685697 +0000 UTC m=+0.131501291 container died 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:27:45 compute-0 python3.9[250395]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:27:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e59698ba24ba4753a25dcf3f4169eeb29eca7cbf325c3f6a12b9f9c1f32373d-merged.mount: Deactivated successfully.
Nov 29 05:27:45 compute-0 podman[250397]: 2025-11-29 05:27:45.138071411 +0000 UTC m=+0.168886995 container remove 8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:27:45 compute-0 sudo[250388]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:45 compute-0 systemd[1]: libpod-conmon-8d98f443afe7946b6454a3c91fff303a78ec5bdee0196883ce46dac0ea53159c.scope: Deactivated successfully.
Nov 29 05:27:45 compute-0 podman[250461]: 2025-11-29 05:27:45.293100649 +0000 UTC m=+0.043323859 container create 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:27:45 compute-0 systemd[1]: Started libpod-conmon-41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb.scope.
Nov 29 05:27:45 compute-0 podman[250461]: 2025-11-29 05:27:45.274991031 +0000 UTC m=+0.025214221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:27:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69cc3a79a7ce6e5ec932b8b7ad03feddb79a027289a8c683f83168c1494fd80a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69cc3a79a7ce6e5ec932b8b7ad03feddb79a027289a8c683f83168c1494fd80a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69cc3a79a7ce6e5ec932b8b7ad03feddb79a027289a8c683f83168c1494fd80a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69cc3a79a7ce6e5ec932b8b7ad03feddb79a027289a8c683f83168c1494fd80a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:45 compute-0 podman[250461]: 2025-11-29 05:27:45.404484771 +0000 UTC m=+0.154708051 container init 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:27:45 compute-0 podman[250461]: 2025-11-29 05:27:45.418558401 +0000 UTC m=+0.168781611 container start 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:27:45 compute-0 podman[250461]: 2025-11-29 05:27:45.423211494 +0000 UTC m=+0.173434704 container attach 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:27:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:45 compute-0 sudo[250607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqxzdfcgkgbziwnaplyryjidhaszezrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394065.3354218-1348-264274926723110/AnsiballZ_stat.py'
Nov 29 05:27:45 compute-0 sudo[250607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:45 compute-0 python3.9[250609]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:27:45 compute-0 sudo[250607]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:46 compute-0 condescending_knuth[250500]: {
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:     "0": [
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:         {
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "devices": [
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "/dev/loop3"
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             ],
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_name": "ceph_lv0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_size": "21470642176",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "name": "ceph_lv0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "tags": {
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.cluster_name": "ceph",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.crush_device_class": "",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.encrypted": "0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.osd_id": "0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.type": "block",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.vdo": "0"
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             },
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "type": "block",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "vg_name": "ceph_vg0"
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:         }
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:     ],
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:     "1": [
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:         {
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "devices": [
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "/dev/loop4"
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             ],
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_name": "ceph_lv1",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_size": "21470642176",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "name": "ceph_lv1",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "tags": {
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.cluster_name": "ceph",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.crush_device_class": "",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.encrypted": "0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.osd_id": "1",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.type": "block",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.vdo": "0"
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             },
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "type": "block",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "vg_name": "ceph_vg1"
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:         }
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:     ],
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:     "2": [
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:         {
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "devices": [
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "/dev/loop5"
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             ],
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_name": "ceph_lv2",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_size": "21470642176",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "name": "ceph_lv2",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "tags": {
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.cluster_name": "ceph",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.crush_device_class": "",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.encrypted": "0",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.osd_id": "2",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.type": "block",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:                 "ceph.vdo": "0"
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             },
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "type": "block",
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:             "vg_name": "ceph_vg2"
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:         }
Nov 29 05:27:46 compute-0 condescending_knuth[250500]:     ]
Nov 29 05:27:46 compute-0 condescending_knuth[250500]: }
Nov 29 05:27:46 compute-0 systemd[1]: libpod-41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb.scope: Deactivated successfully.
Nov 29 05:27:46 compute-0 podman[250651]: 2025-11-29 05:27:46.226214786 +0000 UTC m=+0.026837939 container died 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:27:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-69cc3a79a7ce6e5ec932b8b7ad03feddb79a027289a8c683f83168c1494fd80a-merged.mount: Deactivated successfully.
Nov 29 05:27:46 compute-0 podman[250651]: 2025-11-29 05:27:46.290670174 +0000 UTC m=+0.091293317 container remove 41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:27:46 compute-0 systemd[1]: libpod-conmon-41786087ab743f4f2267848e9250b706e4ec9b64d7aa6af7c5bc34678f7f24fb.scope: Deactivated successfully.
Nov 29 05:27:46 compute-0 sudo[250227]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:46 compute-0 sudo[250709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:27:46 compute-0 sudo[250709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:46 compute-0 sudo[250709]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:46 compute-0 sshd[190545]: drop connection #0 from [120.48.175.69]:46376 on [38.102.83.17]:22 penalty: exceeded LoginGraceTime
Nov 29 05:27:46 compute-0 sudo[250757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:27:46 compute-0 sudo[250757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:46 compute-0 sudo[250757]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:46 compute-0 sudo[250841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-expmecjsawsuipubeyfozjywyfdzoqst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394066.2151036-1356-241854222022565/AnsiballZ_stat.py'
Nov 29 05:27:46 compute-0 sudo[250841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:46 compute-0 sudo[250812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:27:46 compute-0 sudo[250812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:46 compute-0 sudo[250812]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:46 compute-0 sudo[250856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:27:46 compute-0 sudo[250856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:46 compute-0 python3.9[250853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:27:46 compute-0 sudo[250841]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:46 compute-0 ceph-mon[75176]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:47 compute-0 podman[250987]: 2025-11-29 05:27:47.113317032 +0000 UTC m=+0.070242999 container create 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 29 05:27:47 compute-0 systemd[1]: Started libpod-conmon-9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68.scope.
Nov 29 05:27:47 compute-0 podman[250987]: 2025-11-29 05:27:47.082074426 +0000 UTC m=+0.039000433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:27:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:27:47 compute-0 podman[250987]: 2025-11-29 05:27:47.23155203 +0000 UTC m=+0.188477997 container init 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:27:47 compute-0 podman[250987]: 2025-11-29 05:27:47.244804871 +0000 UTC m=+0.201730828 container start 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:27:47 compute-0 podman[250987]: 2025-11-29 05:27:47.250213321 +0000 UTC m=+0.207139298 container attach 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:27:47 compute-0 wizardly_feynman[251032]: 167 167
Nov 29 05:27:47 compute-0 systemd[1]: libpod-9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68.scope: Deactivated successfully.
Nov 29 05:27:47 compute-0 podman[250987]: 2025-11-29 05:27:47.253842009 +0000 UTC m=+0.210767936 container died 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:27:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9dfe395a33480fa2f65789d3010a63d750f6869dec0c0e19adb8388d1edd4e63-merged.mount: Deactivated successfully.
Nov 29 05:27:47 compute-0 podman[250987]: 2025-11-29 05:27:47.29360597 +0000 UTC m=+0.250531927 container remove 9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:27:47 compute-0 systemd[1]: libpod-conmon-9a6600ae3e1e7aca2e20a6e197fe65094942c97eb7cb37e94da7f1186c730d68.scope: Deactivated successfully.
Nov 29 05:27:47 compute-0 sudo[251073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olqnxavpczpijihyktiaownzsxpnrvkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394066.2151036-1356-241854222022565/AnsiballZ_copy.py'
Nov 29 05:27:47 compute-0 sudo[251073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:47 compute-0 python3.9[251077]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764394066.2151036-1356-241854222022565/.source _original_basename=.6k0tq8qg follow=False checksum=bf754058a6438a797db5195aacffe88f31464064 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 05:27:47 compute-0 podman[251083]: 2025-11-29 05:27:47.544547997 +0000 UTC m=+0.071400317 container create 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:27:47 compute-0 sudo[251073]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:47 compute-0 systemd[1]: Started libpod-conmon-4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec.scope.
Nov 29 05:27:47 compute-0 podman[251083]: 2025-11-29 05:27:47.513565558 +0000 UTC m=+0.040417958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:27:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d28c498e98f8ef4074540850767c252f548a8617d0f497e52e9c3659c21aa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d28c498e98f8ef4074540850767c252f548a8617d0f497e52e9c3659c21aa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d28c498e98f8ef4074540850767c252f548a8617d0f497e52e9c3659c21aa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38d28c498e98f8ef4074540850767c252f548a8617d0f497e52e9c3659c21aa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:27:47 compute-0 podman[251083]: 2025-11-29 05:27:47.666846403 +0000 UTC m=+0.193698713 container init 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 05:27:47 compute-0 podman[251083]: 2025-11-29 05:27:47.678420463 +0000 UTC m=+0.205272773 container start 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 05:27:47 compute-0 podman[251083]: 2025-11-29 05:27:47.682181184 +0000 UTC m=+0.209033484 container attach 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:27:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:48 compute-0 python3.9[251260]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]: {
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "osd_id": 0,
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "type": "bluestore"
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:     },
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "osd_id": 1,
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "type": "bluestore"
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:     },
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "osd_id": 2,
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:         "type": "bluestore"
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]:     }
Nov 29 05:27:48 compute-0 optimistic_dijkstra[251102]: }
Nov 29 05:27:48 compute-0 systemd[1]: libpod-4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec.scope: Deactivated successfully.
Nov 29 05:27:48 compute-0 podman[251083]: 2025-11-29 05:27:48.758987066 +0000 UTC m=+1.285839406 container died 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:27:48 compute-0 systemd[1]: libpod-4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec.scope: Consumed 1.088s CPU time.
Nov 29 05:27:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-38d28c498e98f8ef4074540850767c252f548a8617d0f497e52e9c3659c21aa3-merged.mount: Deactivated successfully.
Nov 29 05:27:48 compute-0 podman[251083]: 2025-11-29 05:27:48.831250433 +0000 UTC m=+1.358102753 container remove 4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:27:48 compute-0 systemd[1]: libpod-conmon-4396805a7a86e925cd4ee504a92321353d4c0e17f87827d52e6aa78a335f2dec.scope: Deactivated successfully.
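The lines above form one complete podman teardown: the libpod scope deactivates, the container `died` event fires, the overlay mount is released, and the `remove` event follows. A sketch for watching the same lifecycle outside the journal, assuming podman is on PATH and that `podman events --format json` emits one JSON object per line (field names as podman capitalizes them):

    import json
    import subprocess

    # Stream recent and future container lifecycle events; Ctrl-C to stop.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--since", "1m"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Status") in ("died", "remove"):
            print(ev.get("Status"), ev.get("Name"), ev.get("ID", "")[:12])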
Nov 29 05:27:48 compute-0 sudo[250856]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:27:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:27:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:27:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:27:48 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ae7626a2-fc45-4198-9949-e7f74803f0a6 does not exist
Nov 29 05:27:48 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d75533b1-82c9-465d-8dbb-adb9370ddc7b does not exist
Nov 29 05:27:48 compute-0 sudo[251347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:27:48 compute-0 sudo[251347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:48 compute-0 sudo[251347]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:48 compute-0 ceph-mon[75176]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:27:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
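Lines like "pgmap v684: 305 pgs: 305 active+clean; ..." repeat throughout this window; they are the mon's cluster-utilization heartbeat. A sketch of pulling the fields out of one such line with a regular expression, the pattern written against the exact layout above:

    import re

    line = ("pgmap v684: 305 pgs: 305 active+clean; "
            "456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")

    m = re.match(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail",
        line,
    )
    assert m is not None
    print(m.groupdict())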
Nov 29 05:27:49 compute-0 sudo[251400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:27:49 compute-0 sudo[251400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:27:49 compute-0 sudo[251400]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:49 compute-0 podman[251447]: 2025-11-29 05:27:49.141982795 +0000 UTC m=+0.074004830 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
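The health_status=healthy field in the event above comes from podman's periodic healthcheck (the configured test is /openstack/healthcheck mounted into the container). The same status can be read on demand; a sketch, assuming the standard inspect template path for health state:

    import subprocess

    def health_status(name: str) -> str:
        # Reads the same value logged as health_status=... above.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    if __name__ == "__main__":
        print(health_status("ovn_metadata_agent"))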
Nov 29 05:27:49 compute-0 python3.9[251518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:27:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:50 compute-0 python3.9[251639]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394068.8717532-1382-59214853178857/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
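The copy above logs checksum=211ffd0b..., the SHA-1 digest ansible computes for idempotence (checksum_algorithm=sha1 in the matching stat calls). A minimal equivalent of that digest, with the path argument purely illustrative:

    import hashlib

    def sha1_of(path: str) -> str:
        # Same digest ansible's stat/copy modules log as checksum=... above.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()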
Nov 29 05:27:51 compute-0 ceph-mon[75176]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:51 compute-0 python3.9[251789]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
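Across the pg_autoscaler block above, each pool's pg target is its capacity ratio times its bias times a constant factor of 300. That factor is consistent with the default of 100 placement groups per OSD across the 3 OSDs backing this 60 GiB (64411926528-byte) cluster, though both numbers are inferred from the log rather than read from configuration. A worked check against the 'cephfs.cephfs.meta' line:

    # Reproducing the pg_autoscaler arithmetic logged above. Both constants
    # are assumptions: mon_target_pg_per_osd defaults to 100, and this host
    # carries 3 OSDs, which together give the factor of 300 the log implies.
    TARGET_PG_PER_OSD = 100
    N_OSDS = 3

    def pg_target(capacity_ratio: float, bias: float) -> float:
        return capacity_ratio * bias * TARGET_PG_PER_OSD * N_OSDS

    # Pool 'cephfs.cephfs.meta': "using 5.087256625643029e-07 ... bias 4.0"
    print(pg_target(5.087256625643029e-07, 4.0))
    # -> 0.0006104707950771635, matching "pg target 0.0006104707950771635";
    # quantization then clamps to the pool's floor, hence "quantized to 16".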
Nov 29 05:27:51 compute-0 python3.9[251910]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764394070.3757503-1397-195886511480855/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 05:27:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:52 compute-0 sudo[252060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqhiphwhnitwjljrufqtmnwlhajelhot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394072.0644171-1414-103885317924100/AnsiballZ_container_config_data.py'
Nov 29 05:27:52 compute-0 sudo[252060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:52 compute-0 python3.9[252062]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 05:27:52 compute-0 sudo[252060]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:53 compute-0 ceph-mon[75176]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:53 compute-0 sudo[252212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deyjudrwbyqfpgyhgqazpzqjnruxxsmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394072.94321-1423-17888600773120/AnsiballZ_container_config_hash.py'
Nov 29 05:27:53 compute-0 sudo[252212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:53 compute-0 python3.9[252214]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 05:27:53 compute-0 sudo[252212]: pam_unix(sudo:session): session closed for user root
Nov 29 05:27:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:27:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3321 writes, 14K keys, 3321 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3321 writes, 3321 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1288 writes, 5837 keys, 1288 commit groups, 1.0 writes per commit group, ingest: 8.55 MB, 0.01 MB/s
                                           Interval WAL: 1288 writes, 1288 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.6      0.15              0.06         7    0.021       0      0       0.0       0.0
                                             L6      1/0    6.83 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    147.4    121.3      0.34              0.15         6    0.057     24K   3194       0.0       0.0
                                            Sum      1/0    6.83 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6    102.2    116.2      0.49              0.22        13    0.038     24K   3194       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    118.3    119.0      0.29              0.13         8    0.036     17K   2463       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    147.4    121.3      0.34              0.15         6    0.057     24K   3194       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    105.8      0.15              0.06         6    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556a62a271f0#2 capacity: 308.00 MB usage: 1.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(103,1.36 MB,0.440989%) FilterBlock(14,74.42 KB,0.0235966%) IndexBlock(14,148.78 KB,0.0471734%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
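The rocksdb stats dump that ends above is free-form text, but its headline lines are stable enough to scrape. A sketch parsing the "Cumulative writes" line and sanity-checking its rate against the 1200 s uptime printed alongside it:

    import re

    stats = ("Cumulative writes: 3321 writes, 14K keys, 3321 commit groups, "
             "1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s")

    m = re.search(r"(\d+) writes.*ingest: ([\d.]+) GB, ([\d.]+) MB/s", stats)
    writes, ingest_gb, rate = int(m.group(1)), float(m.group(2)), float(m.group(3))

    # 0.02 GB over the 1200 s uptime is ~0.017 MB/s, which rocksdb
    # rounds to the 0.02 MB/s it prints.
    print(writes, ingest_gb * 1024 / 1200, rate)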
Nov 29 05:27:54 compute-0 sudo[252364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vklffdbgdqujfvowkbholardtuxkbifl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764394074.0200114-1433-96768160795114/AnsiballZ_edpm_container_manage.py'
Nov 29 05:27:54 compute-0 sudo[252364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:27:54 compute-0 python3[252366]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 05:27:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:27:55 compute-0 ceph-mon[75176]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:57 compute-0 ceph-mon[75176]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:58 compute-0 ceph-mon[75176]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:27:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:01 compute-0 ceph-mon[75176]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:01 compute-0 podman[252434]: 2025-11-29 05:28:01.255674841 +0000 UTC m=+0.302721629 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 05:28:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:02 compute-0 sshd[190545]: drop connection #0 from [120.48.175.69]:50416 on [38.102.83.17]:22 penalty: exceeded LoginGraceTime
Nov 29 05:28:02 compute-0 ceph-mon[75176]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:04 compute-0 ceph-mon[75176]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:05 compute-0 podman[252380]: 2025-11-29 05:28:05.019592784 +0000 UTC m=+10.184113521 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 05:28:05 compute-0 podman[252491]: 2025-11-29 05:28:05.169427476 +0000 UTC m=+0.043768920 container create 8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 05:28:05 compute-0 podman[252491]: 2025-11-29 05:28:05.145524698 +0000 UTC m=+0.019866142 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 05:28:05 compute-0 python3[252366]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
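The PODMAN-CONTAINER-DEBUG line above shows how edpm_container_manage expands its config_data dict into a podman create invocation. An illustrative (not source-accurate) reduction of that translation, handling only the keys visible in the log:

    # Reduced sketch of the config_data -> podman-flag mapping whose full
    # result is logged above; the real module handles many more keys.
    def to_podman_args(name: str, cfg: dict) -> list[str]:
        args = ["podman", "create", "--name", name]
        for k, v in cfg.get("environment", {}).items():
            args += ["--env", f"{k}={v}"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        args += [f"--privileged={cfg.get('privileged', False)}"]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args

    print(" ".join(to_podman_args("nova_compute_init", {
        "image": "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "net": "none", "privileged": False, "user": "root",
        "environment": {"__OS_DEBUG": False},
        "volumes": ["/dev/log:/dev/log"],
    })))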
Nov 29 05:28:05 compute-0 sudo[252364]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:05 compute-0 sudo[252680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lseyhsdznirbqwvupntvcddxouczipbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394085.5562243-1441-84192192892665/AnsiballZ_stat.py'
Nov 29 05:28:05 compute-0 sudo[252680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:06 compute-0 python3.9[252682]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:28:06 compute-0 sudo[252680]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:06 compute-0 ceph-mon[75176]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:07 compute-0 sudo[252834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leexpzcuoluquwapfntpegabdfegqqqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394086.6575282-1453-113586058622783/AnsiballZ_container_config_data.py'
Nov 29 05:28:07 compute-0 sudo[252834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:07 compute-0 python3.9[252836]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 05:28:07 compute-0 sudo[252834]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:07 compute-0 sudo[252986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qczanvctahzzdoljozdpgutkhxdlzikt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394087.5435877-1462-275292027119208/AnsiballZ_container_config_hash.py'
Nov 29 05:28:07 compute-0 sudo[252986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:08 compute-0 python3.9[252988]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 05:28:08 compute-0 sudo[252986]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:08 compute-0 ceph-mon[75176]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:09 compute-0 sudo[253138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hauqhtrotdklrryrdmzskxsqglmsghhu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764394088.60935-1472-121972899735769/AnsiballZ_edpm_container_manage.py'
Nov 29 05:28:09 compute-0 sudo[253138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:09 compute-0 python3[253140]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 05:28:09 compute-0 podman[253176]: 2025-11-29 05:28:09.574015535 +0000 UTC m=+0.076861439 container create 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 05:28:09 compute-0 podman[253176]: 2025-11-29 05:28:09.528083115 +0000 UTC m=+0.030929069 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 05:28:09 compute-0 python3[253140]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 05:28:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:09 compute-0 sudo[253138]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:10 compute-0 podman[253240]: 2025-11-29 05:28:10.057446922 +0000 UTC m=+0.100898880 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 05:28:10 compute-0 sudo[253391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtiqafucwqcdeaskkwznokojmuwrltnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394090.030208-1480-258523279337915/AnsiballZ_stat.py'
Nov 29 05:28:10 compute-0 sudo[253391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:10 compute-0 python3.9[253393]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:28:10 compute-0 sudo[253391]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:10 compute-0 ceph-mon[75176]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:28:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:28:11 compute-0 sudo[253545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqwmxfqoqigetobhoxlysthdgnzwhmua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394091.0021222-1489-161405757583427/AnsiballZ_file.py'
Nov 29 05:28:11 compute-0 sudo[253545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:28:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:28:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:28:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:28:11 compute-0 python3.9[253547]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:28:11 compute-0 sudo[253545]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:12 compute-0 sudo[253696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktdfxamhsbogyjdniaajcpimtbzgrtxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394091.6576428-1489-181068609803635/AnsiballZ_copy.py'
Nov 29 05:28:12 compute-0 sudo[253696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:12 compute-0 python3.9[253698]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764394091.6576428-1489-181068609803635/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 05:28:12 compute-0 sudo[253696]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:12 compute-0 sudo[253772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gigptsrurheuwfjktyfqrkongqtcxbdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394091.6576428-1489-181068609803635/AnsiballZ_systemd.py'
Nov 29 05:28:12 compute-0 sudo[253772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:12 compute-0 ceph-mon[75176]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:12 compute-0 python3.9[253774]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 05:28:12 compute-0 systemd[1]: Reloading.
Nov 29 05:28:12 compute-0 systemd-rc-local-generator[253802]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:28:12 compute-0 systemd-sysv-generator[253805]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:28:13 compute-0 sudo[253772]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:13 compute-0 sudo[253883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aypiyghrokshjtokbdvjmrjkgeulfsqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394091.6576428-1489-181068609803635/AnsiballZ_systemd.py'
Nov 29 05:28:13 compute-0 sudo[253883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:28:13.739 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:28:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:28:13.740 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:28:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:28:13.740 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:28:13 compute-0 python3.9[253885]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 05:28:14 compute-0 systemd[1]: Reloading.
Nov 29 05:28:14 compute-0 systemd-rc-local-generator[253909]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 05:28:14 compute-0 systemd-sysv-generator[253915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 05:28:14 compute-0 systemd[1]: Starting nova_compute container...
Nov 29 05:28:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:14 compute-0 podman[253924]: 2025-11-29 05:28:14.627512973 +0000 UTC m=+0.113672529 container init 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Nov 29 05:28:14 compute-0 podman[253924]: 2025-11-29 05:28:14.639492272 +0000 UTC m=+0.125651778 container start 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251125)
Nov 29 05:28:14 compute-0 podman[253924]: nova_compute
Nov 29 05:28:14 compute-0 nova_compute[253939]: + sudo -E kolla_set_configs
Nov 29 05:28:14 compute-0 systemd[1]: Started nova_compute container.
Nov 29 05:28:14 compute-0 sudo[253883]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Validating config file
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying service configuration files
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Deleting /etc/ceph
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Creating directory /etc/ceph
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Writing out command to execute
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 05:28:14 compute-0 nova_compute[253939]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 05:28:14 compute-0 nova_compute[253939]: ++ cat /run_command
Nov 29 05:28:14 compute-0 nova_compute[253939]: + CMD=nova-compute
Nov 29 05:28:14 compute-0 nova_compute[253939]: + ARGS=
Nov 29 05:28:14 compute-0 nova_compute[253939]: + sudo kolla_copy_cacerts
Nov 29 05:28:14 compute-0 nova_compute[253939]: + [[ ! -n '' ]]
Nov 29 05:28:14 compute-0 nova_compute[253939]: + . kolla_extend_start
Nov 29 05:28:14 compute-0 nova_compute[253939]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 05:28:14 compute-0 nova_compute[253939]: Running command: 'nova-compute'
Nov 29 05:28:14 compute-0 nova_compute[253939]: + umask 0022
Nov 29 05:28:14 compute-0 nova_compute[253939]: + exec nova-compute
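The kolla_set_configs run traced above (strategy COPY_ALWAYS) re-copies every file listed in /var/lib/kolla/config_files/config.json into place and resets ownership and permissions on every container start, then execs the command read from /run_command. A minimal sketch of that copy pass, assuming kolla's config.json field names and that it runs inside the container where those paths exist:

    import json
    import shutil

    # COPY_ALWAYS: unconditionally re-copy each configured file on start.
    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for item in cfg.get("config_files", []):
        print("Copying", item["source"], "to", item["dest"])
        shutil.copy(item["source"], item["dest"])
        # kolla additionally applies item["owner"] and item["perm"] here,
        # which is the "Setting permission for ..." step in the log.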
Nov 29 05:28:14 compute-0 ceph-mon[75176]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:15 compute-0 python3.9[254100]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:28:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:16 compute-0 python3.9[254251]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:28:16 compute-0 ceph-mon[75176]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:16 compute-0 nova_compute[253939]: 2025-11-29 05:28:16.884 253943 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 05:28:16 compute-0 nova_compute[253939]: 2025-11-29 05:28:16.885 253943 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 05:28:16 compute-0 nova_compute[253939]: 2025-11-29 05:28:16.885 253943 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 05:28:16 compute-0 nova_compute[253939]: 2025-11-29 05:28:16.885 253943 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.022 253943 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.044 253943 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.045 253943 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
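The grep probe above exits 1 simply because node.session.scan does not appear in the wrapped /sbin/iscsiadm (replaced earlier with the run-on-host shim), and the caller treats that as an answer rather than a failure. One way to run the same probe via oslo.concurrency while whitelisting exit code 1, so "not found" does not raise:

    from oslo_concurrency import processutils

    # grep exit code 1 means "pattern absent", which here answers the
    # capability question, so it is accepted alongside 0.
    out, _err = processutils.execute(
        "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
        check_exit_code=[0, 1],
    )
    print("manual scan supported" if out else "manual scan not supported")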
Nov 29 05:28:17 compute-0 python3.9[254405]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 05:28:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.736 253943 INFO nova.virt.driver [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.880 253943 INFO nova.compute.provider_config [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.901 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.901 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.901 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.901 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.902 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.903 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.904 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.905 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.906 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.907 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.908 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.908 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.908 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.908 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.908 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.909 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.910 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.911 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.912 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.913 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.914 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.915 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.916 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.917 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.918 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.919 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.920 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.921 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.922 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.923 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.924 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.925 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.926 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.927 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.928 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.929 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.929 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.930 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.930 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.931 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.931 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.932 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.932 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.933 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.933 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.934 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.934 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.934 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.935 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.935 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.936 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.936 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.936 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.937 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.937 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.938 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.938 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.939 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.939 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.939 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.940 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.940 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.941 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.941 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.941 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.942 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.942 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.942 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.943 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.943 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.943 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.944 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.944 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.944 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.945 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.945 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.945 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.946 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.946 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.947 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.947 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.948 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.948 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.949 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.949 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.949 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.950 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.950 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.950 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.951 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.951 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.952 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.952 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.952 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.953 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.953 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.954 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.954 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.955 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.955 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.955 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.956 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.956 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.956 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.957 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.957 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.958 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.958 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.958 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.959 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.959 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.959 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.960 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.960 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.960 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.961 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.961 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.962 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.962 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.962 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.963 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.963 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.963 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.964 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.964 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.965 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.965 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.965 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.966 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.966 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.967 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.967 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.968 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.968 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.968 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.969 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.969 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.970 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.970 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.971 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.971 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.971 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.972 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.972 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.972 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.973 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.973 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.974 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.974 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.974 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.975 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.975 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.975 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.976 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.976 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.976 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.977 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.977 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.978 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.978 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.978 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.979 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.979 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.979 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.980 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.980 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.981 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.981 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.981 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.981 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.982 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.982 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.982 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.982 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.982 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.983 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.983 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.983 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.983 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.983 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.984 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.984 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.984 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.984 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.984 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.985 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.985 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.985 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.985 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.985 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.986 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.986 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.986 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.986 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.986 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.987 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.987 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.987 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.987 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.987 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.988 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.988 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.988 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.988 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.989 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.989 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.989 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.989 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.990 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.990 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.990 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.990 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.991 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.991 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.991 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.991 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.991 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.992 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.992 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.992 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.993 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.993 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.993 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.993 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.993 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.994 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.994 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.994 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.994 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.995 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.995 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.995 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.995 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.995 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.996 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.996 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.996 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.996 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.997 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.997 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.997 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.997 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.998 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.998 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.998 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.998 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.998 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.999 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.999 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:17 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.999 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:17.999 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.000 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.000 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.000 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.001 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.001 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.001 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.001 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.001 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.002 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.002 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.002 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.002 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.002 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.003 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.003 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.003 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.003 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.003 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.004 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.004 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.004 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
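[editor's note] The [barbican] block above, like every group in this dump, is produced by oslo.config's log_opt_values(), the call named at the end of each record. A minimal sketch of that mechanism, using simplified stand-in registrations rather than castellan's real ones:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # Hypothetical registrations standing in for what castellan does on import.
    CONF.register_opts(
        [
            cfg.StrOpt('barbican_endpoint_type', default='internal',
                       help='Interface used to reach the barbican API.'),
            cfg.IntOpt('number_of_retries', default=60,
                       help='How many times to retry a barbican operation.'),
        ],
        group='barbican')
    CONF([], project='demo')

    # Prints one "<group>.<option> = <value>" DEBUG line per registered
    # option, which is exactly the pattern filling this section of the log.
    CONF.log_opt_values(LOG, logging.DEBUG)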
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.004 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.005 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.005 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.005 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.005 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.006 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.006 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.006 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.007 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.007 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.007 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.007 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.007 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.008 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.008 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.008 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.008 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.008 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.009 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.009 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.009 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.009 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.009 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.010 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.010 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
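[editor's note] The [vault] group (kv_mountpoint=secret, kv_version=2, vault_url=http://127.0.0.1:8200) is castellan's Vault backend, and every value is still at its default here, so this host is evidently not using Vault as its key manager. A hedged sketch of changing such values from code with oslo.config; the registrations are simplified and the URL is a placeholder:

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    # Simplified stand-ins for castellan's real [vault] registrations.
    CONF.register_opts(
        [
            cfg.StrOpt('vault_url', default='http://127.0.0.1:8200'),
            cfg.BoolOpt('use_ssl', default=False),
            cfg.StrOpt('kv_mountpoint', default='secret'),
        ],
        group='vault')
    CONF([], project='demo')

    # set_override() is the standard oslo.config way to change a value
    # from code instead of a config file.
    CONF.set_override('vault_url', 'https://vault.example.com:8200',
                      group='vault')
    CONF.set_override('use_ssl', True, group='vault')
    print(CONF.vault.vault_url)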
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.010 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.010 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.010 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.011 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.011 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.011 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.011 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.011 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.012 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.012 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.012 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.012 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.013 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.013 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.013 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.013 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.014 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.014 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.014 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
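[editor's note] The [keystone] group above is the standard keystoneauth1 session/adapter option set (cafile, certfile, timeout, service_type, region_name, valid_interfaces, endpoint_override, retry knobs), registered once per service group. A sketch of how a project pulls that option set in; the group name matches the log, the rest is generic keystoneauth1 usage:

    from keystoneauth1 import loading
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    # Registers cafile/certfile/keyfile/insecure/timeout/... for the group.
    loading.register_session_conf_options(CONF, 'keystone')
    # Registers service_type/service_name/region_name/valid_interfaces/
    # endpoint_override and the *_retries options for the same group.
    loading.register_adapter_conf_options(CONF, 'keystone')
    CONF([], project='demo')

    # Defaults are None until the service sets them; nova defaults
    # service_type to 'identity', as seen in the log.
    print(CONF.keystone.service_type)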
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.014 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.015 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.015 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.015 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.015 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.015 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.016 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.016 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.016 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.016 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.016 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.017 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.018 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.019 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.020 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 WARNING oslo_config.cfg [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 05:28:18 compute-0 nova_compute[253939]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 05:28:18 compute-0 nova_compute[253939]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 05:28:18 compute-0 nova_compute[253939]: and ``live_migration_inbound_addr`` respectively.
Nov 29 05:28:18 compute-0 nova_compute[253939]: ).  Its value may be silently ignored in the future.
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
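[editor's note] The WARNING above is oslo.config's standard deprecated-for-removal notice: live_migration_uri is explicitly set (to qemu+tls://%s/system) even though the libvirt driver now prefers live_migration_scheme plus live_migration_inbound_addr. A sketch of the registration pattern that produces such a warning; the help and reason strings are paraphrased, not nova's exact text:

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opt(
        cfg.StrOpt(
            'live_migration_uri',
            deprecated_for_removal=True,
            deprecated_reason='Superseded by live_migration_scheme and '
                              'live_migration_inbound_addr.',
            help='Override the target URI used for live migration.'),
        group='libvirt')
    CONF([], project='demo')

    # oslo.config emits the 'deprecated for removal ... may be silently
    # ignored in the future' warning once a config file actually sets the
    # option; with no value supplied, this sketch only registers it.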
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.021 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.022 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.023 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rbd_secret_uuid        = 93f82912-647c-5e78-b081-707d0a2966d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.024 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.025 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.026 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.027 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.028 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
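[editor's note] Taken together, images_type=rbd, images_rbd_pool=vms, rbd_user=openstack and images_rbd_ceph_conf=/etc/ceph/ceph.conf say that this host's instance disks live in Ceph. A hypothetical sanity check with the python rados/rbd bindings, reusing exactly those values (it assumes the host can read the matching Ceph keyring):

    import rados
    import rbd

    # Values mirror the libvirt.* options in the log.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        # Each nova instance disk appears as '<instance_uuid>_disk'.
        print(rbd.RBD().list(ioctx))
    finally:
        ioctx.close()
        cluster.shutdown()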
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.029 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.030 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.031 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.031 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.031 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.031 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.031 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.032 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.033 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.033 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.033 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.033 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.033 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
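[editor's note] service_metadata_proxy=True with a (masked) metadata_proxy_shared_secret means nova's metadata API trusts neutron's metadata proxy instead of the caller's source address: the proxy signs the instance UUID with the shared secret, and nova recomputes and compares the signature. A sketch of that signing with placeholder values:

    import hashlib
    import hmac

    shared_secret = b'not-the-real-secret'  # masked as **** in the log
    instance_id = b'11111111-2222-3333-4444-555555555555'  # placeholder

    signature = hmac.new(shared_secret, instance_id,
                         hashlib.sha256).hexdigest()
    # The proxy sends this as the X-Instance-ID-Signature header; nova
    # recomputes it and compares using hmac.compare_digest() before
    # trusting the X-Instance-ID header.
    print(signature)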
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.034 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
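[editor's note] notification_format=unversioned means only the legacy notification payloads are emitted; versioned_notifications_topics shows where versioned ones would go if enabled. A rough sketch of emitting a legacy-style notification with oslo.messaging; the fake:// transport URL, publisher_id and payload are placeholders:

    import oslo_messaging
    from oslo_config import cfg

    CONF = cfg.CONF
    # In-memory test transport; a real deployment uses rabbit://...
    transport = oslo_messaging.get_notification_transport(CONF, url='fake://')
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id='compute.compute-0',
        driver='messagingv2',
        topics=['notifications'])
    notifier.info({}, 'compute.instance.create.end',
                  {'instance_id': '11111111-2222-3333-4444-555555555555'})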
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.035 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.036 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.037 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.038 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.039 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.040 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.041 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.042 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.043 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.044 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.045 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.046 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.046 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.046 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.046 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.046 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.047 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.048 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.049 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.050 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.050 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.050 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.050 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.050 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.051 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.051 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.051 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.051 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.051 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.052 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.052 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.052 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.052 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.052 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.053 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.053 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.053 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.053 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.054 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.055 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.055 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.055 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.055 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.055 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.056 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.056 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.056 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.056 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.056 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.057 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.058 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.059 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.060 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.061 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.062 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.063 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.064 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.065 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.066 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.067 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.068 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.069 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.070 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.071 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.072 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.073 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.074 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.075 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.076 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.077 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.078 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.078 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.078 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.078 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.078 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.079 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.080 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.081 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.082 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.083 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.084 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.085 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.086 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.087 253943 DEBUG oslo_service.service [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
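[editor's note] The "option = value" dump that ends at the asterisk separator above is produced by oslo.config's log_opt_values(), which every OpenStack service can call at DEBUG level on startup. A minimal sketch of how that output is generated (the option names here are a small illustrative subset of those logged above, and 'fake_secret' is hypothetical):

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    logging.basicConfig(level=logging.DEBUG)

    CONF = cfg.CONF
    CONF.register_opts(
        [
            cfg.BoolOpt('amqp_durable_queues', default=False),
            cfg.IntOpt('heartbeat_timeout_threshold', default=60),
            # secret=True is why values such as oslo_limit.password and the
            # notification transport_url appear as '****' in the dump above
            cfg.StrOpt('fake_secret', secret=True, default='hunter2'),
        ],
        group='oslo_messaging_rabbit',
    )

    CONF([])  # a real service passes --config-file=/etc/nova/nova.conf here
    CONF.log_opt_values(LOG, logging.DEBUG)  # one "option = value" line each
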
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.088 253943 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.103 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.104 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.104 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.104 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 05:28:18 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 05:28:18 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.196 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fefaf0e3940> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.200 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fefaf0e3940> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.201 253943 INFO nova.virt.libvirt.driver [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Connection event '1' reason 'None'
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.224 253943 WARNING nova.virt.libvirt.driver [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 05:28:18 compute-0 nova_compute[253939]: 2025-11-29 05:28:18.224 253943 DEBUG nova.virt.libvirt.volume.mount [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 05:28:18 compute-0 sudo[254607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cktwdxpuvnshoyqusqvlbtgdzsvittvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394097.7712576-1549-202690491591731/AnsiballZ_podman_container.py'
Nov 29 05:28:18 compute-0 sudo[254607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:18 compute-0 python3.9[254609]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
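[editor's note] The AnsiballZ invocation above is the containers.podman.podman_container module ensuring the nova_nvme_cleaner container is absent (state=absent, force_delete=True). A hedged sketch of the equivalent direct call; only the container name comes from the log, the CLI form is an assumption about doing the same thing by hand:

    import subprocess

    # Force-remove the container; --ignore makes this a no-op (exit 0)
    # when the container does not exist, matching state=absent semantics.
    subprocess.run(
        ['podman', 'rm', '--force', '--ignore', 'nova_nvme_cleaner'],
        check=True,
    )
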
Nov 29 05:28:18 compute-0 ceph-mon[75176]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:18 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:28:18 compute-0 sudo[254607]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.249 253943 INFO nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 05:28:19 compute-0 nova_compute[253939]: 
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <host>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <uuid>60584de4-e080-4148-9fd9-37c7db79f006</uuid>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <cpu>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <arch>x86_64</arch>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model>EPYC-Rome-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <vendor>AMD</vendor>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <microcode version='16777317'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <signature family='23' model='49' stepping='0'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='x2apic'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='tsc-deadline'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='osxsave'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='hypervisor'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='tsc_adjust'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='spec-ctrl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='stibp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='arch-capabilities'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='cmp_legacy'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='topoext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='virt-ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='lbrv'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='tsc-scale'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='vmcb-clean'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='pause-filter'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='pfthreshold'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='svme-addr-chk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='rdctl-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='skip-l1dfl-vmentry'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='mds-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature name='pschange-mc-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <pages unit='KiB' size='4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <pages unit='KiB' size='2048'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <pages unit='KiB' size='1048576'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </cpu>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <power_management>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <suspend_mem/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </power_management>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <iommu support='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <migration_features>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <live/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <uri_transports>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <uri_transport>tcp</uri_transport>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <uri_transport>rdma</uri_transport>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </uri_transports>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </migration_features>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <topology>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <cells num='1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <cell id='0'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:           <memory unit='KiB'>7864320</memory>
Nov 29 05:28:19 compute-0 nova_compute[253939]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 29 05:28:19 compute-0 nova_compute[253939]:           <pages unit='KiB' size='2048'>0</pages>
Nov 29 05:28:19 compute-0 nova_compute[253939]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 29 05:28:19 compute-0 nova_compute[253939]:           <distances>
Nov 29 05:28:19 compute-0 nova_compute[253939]:             <sibling id='0' value='10'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:           </distances>
Nov 29 05:28:19 compute-0 nova_compute[253939]:           <cpus num='8'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:           </cpus>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         </cell>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </cells>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </topology>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <cache>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </cache>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <secmodel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model>selinux</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <doi>0</doi>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </secmodel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <secmodel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model>dac</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <doi>0</doi>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </secmodel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </host>
Nov 29 05:28:19 compute-0 nova_compute[253939]: 
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <guest>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <os_type>hvm</os_type>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <arch name='i686'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <wordsize>32</wordsize>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <domain type='qemu'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <domain type='kvm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </arch>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <features>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <pae/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <nonpae/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <acpi default='on' toggle='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <apic default='on' toggle='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <cpuselection/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <deviceboot/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <disksnapshot default='on' toggle='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <externalSnapshot/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </features>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </guest>
Nov 29 05:28:19 compute-0 nova_compute[253939]: 
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <guest>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <os_type>hvm</os_type>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <arch name='x86_64'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <wordsize>64</wordsize>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <domain type='qemu'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <domain type='kvm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </arch>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <features>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <acpi default='on' toggle='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <apic default='on' toggle='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <cpuselection/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <deviceboot/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <disksnapshot default='on' toggle='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <externalSnapshot/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </features>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </guest>
Nov 29 05:28:19 compute-0 nova_compute[253939]: 
Nov 29 05:28:19 compute-0 nova_compute[253939]: </capabilities>
Nov 29 05:28:19 compute-0 nova_compute[253939]: 
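[editor's note] The <capabilities> document that nova.virt.libvirt.host logs above is the raw XML libvirt returns for the host. A minimal libvirt-python sketch to retrieve and inspect it, assuming libvirt-python is installed and qemu:///system is reachable (as the earlier "Connecting to libvirt: qemu:///system" line shows):

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getCapabilities()   # same XML nova logs at DEBUG

    root = ET.fromstring(caps_xml)
    arch = root.findtext('./host/cpu/arch')    # 'x86_64' in the log above
    model = root.findtext('./host/cpu/model')  # 'EPYC-Rome-v4' in the log above
    print(arch, model)
    conn.close()
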
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.260 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.296 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 05:28:19 compute-0 nova_compute[253939]: <domainCapabilities>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <domain>kvm</domain>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <arch>i686</arch>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <vcpu max='4096'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <iothreads supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <os supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <enum name='firmware'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <loader supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>rom</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pflash</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='readonly'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>yes</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>no</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='secure'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>no</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </loader>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </os>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <cpu>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='host-passthrough' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='hostPassthroughMigratable'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>on</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>off</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='maximum' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='maximumMigratable'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>on</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>off</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='host-model' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <vendor>AMD</vendor>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='x2apic'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='hypervisor'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='stibp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='overflow-recov'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='succor'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='lbrv'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc-scale'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='flushbyasid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='pause-filter'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='pfthreshold'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='disable' name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='custom' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Dhyana-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Genoa'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='auto-ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='auto-ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-128'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-256'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-512'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v6'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v7'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='KnightsMill'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512er'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512pf'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='KnightsMill-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512er'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512pf'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G4-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tbm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G5-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tbm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SierraForest'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cmpccxadd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SierraForest-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cmpccxadd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='athlon'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='athlon-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='core2duo'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='core2duo-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='coreduo'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='coreduo-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='n270'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='n270-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='phenom'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='phenom-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </cpu>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <memoryBacking supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <enum name='sourceType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>file</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>anonymous</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>memfd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </memoryBacking>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <devices>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <disk supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='diskDevice'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>disk</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>cdrom</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>floppy</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>lun</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='bus'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>fdc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>scsi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>sata</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-non-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </disk>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <graphics supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vnc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>egl-headless</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dbus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </graphics>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <video supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='modelType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vga</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>cirrus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>none</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>bochs</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>ramfb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </video>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <hostdev supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='mode'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>subsystem</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='startupPolicy'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>default</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>mandatory</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>requisite</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>optional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='subsysType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pci</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>scsi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='capsType'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='pciBackend'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </hostdev>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <rng supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-non-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>random</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>egd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>builtin</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </rng>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <filesystem supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='driverType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>path</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>handle</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtiofs</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </filesystem>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <tpm supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tpm-tis</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tpm-crb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>emulator</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>external</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendVersion'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>2.0</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </tpm>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <redirdev supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='bus'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </redirdev>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <channel supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pty</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>unix</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </channel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <crypto supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>qemu</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>builtin</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </crypto>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <interface supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>default</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>passt</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </interface>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <panic supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>isa</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>hyperv</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </panic>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <console supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>null</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pty</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dev</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>file</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pipe</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>stdio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>udp</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tcp</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>unix</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>qemu-vdagent</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dbus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </console>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </devices>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <features>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <gic supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <vmcoreinfo supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <genid supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <backingStoreInput supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <backup supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <async-teardown supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <ps2 supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <sev supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <sgx supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <hyperv supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='features'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>relaxed</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vapic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>spinlocks</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vpindex</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>runtime</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>synic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>stimer</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>reset</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vendor_id</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>frequencies</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>reenlightenment</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tlbflush</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>ipi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>avic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>emsr_bitmap</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>xmm_input</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <defaults>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <spinlocks>4095</spinlocks>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <stimer_direct>on</stimer_direct>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </defaults>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </hyperv>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <launchSecurity supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='sectype'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tdx</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </launchSecurity>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </features>
Nov 29 05:28:19 compute-0 nova_compute[253939]: </domainCapabilities>
Nov 29 05:28:19 compute-0 nova_compute[253939]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
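The dump that closes just above is the per-(arch, machine type) capability document nova-compute retrieves from libvirt via the _get_domain_capabilities helper named on the preceding source line. Inside <mode name='custom'>, each <model> reports usable='yes|no' plus optional deprecated and canonical attributes, and every non-usable model is paired with a sibling <blockers model='...'> element naming the features this host cannot provide. A minimal sketch of how such a dump could be post-processed follows; this is not nova's implementation, and the file name domcaps.xml and function name cpu_model_report are illustrative assumptions only:

    #!/usr/bin/env python3
    # Sketch only (not nova's code): summarize CPU model usability from a
    # libvirt <domainCapabilities> document such as the one logged above.
    # Assumption: the XML was captured locally first, e.g. with
    #   virsh domcapabilities --arch x86_64 --machine pc > domcaps.xml
    import sys
    import xml.etree.ElementTree as ET

    def cpu_model_report(xml_text):
        """Map model name -> usability info for <mode name='custom'>."""
        root = ET.fromstring(xml_text)
        custom = root.find("./cpu/mode[@name='custom']")
        if custom is None:
            return {}
        # <blockers model='X'> elements are siblings of the <model>
        # entries; key them by their model attribute to join them back up.
        blockers = {
            b.get("model"): [f.get("name") for f in b.findall("feature")]
            for b in custom.findall("blockers")
        }
        report = {}
        for m in custom.findall("model"):
            name = m.text.strip()
            report[name] = {
                "usable": m.get("usable") == "yes",
                "deprecated": m.get("deprecated") == "yes",
                "vendor": m.get("vendor"),
                "blockers": blockers.get(name, []),
            }
        return report

    if __name__ == "__main__":
        # argv[1]: path to a captured dump, e.g. domcaps.xml (hypothetical)
        with open(sys.argv[1]) as fh:
            models = cpu_model_report(fh.read())
        for name, info in sorted(models.items()):
            if info["usable"]:
                print(f"{name}: usable")
            else:
                print(f"{name}: blocked by {', '.join(info['blockers'])}")

Run against the dump above, this would report Westmere as usable while Snowridge-v3 comes out blocked by cldemote, core-capability, erms, gfni, movdir64b, movdiri, split-lock-detect and xsaves, matching the <blockers> entries in the log.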
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.303 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 05:28:19 compute-0 nova_compute[253939]: <domainCapabilities>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <domain>kvm</domain>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <arch>i686</arch>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <vcpu max='240'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <iothreads supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <os supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <enum name='firmware'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <loader supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>rom</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pflash</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='readonly'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>yes</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>no</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='secure'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>no</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </loader>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </os>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <cpu>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='host-passthrough' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='hostPassthroughMigratable'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>on</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>off</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 podman[254691]: 2025-11-29 05:28:19.362003528 +0000 UTC m=+0.087129637 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='maximum' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='maximumMigratable'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>on</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>off</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='host-model' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <vendor>AMD</vendor>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='x2apic'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='hypervisor'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='stibp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='overflow-recov'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='succor'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='lbrv'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc-scale'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='flushbyasid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='pause-filter'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='pfthreshold'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='disable' name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='custom' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Dhyana-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Genoa'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='auto-ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='auto-ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-128'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-256'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-512'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v6'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v7'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='KnightsMill'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512er'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512pf'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='KnightsMill-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512er'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512pf'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G4-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tbm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G5-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tbm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SierraForest'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cmpccxadd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SierraForest-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cmpccxadd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='athlon'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='athlon-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='core2duo'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='core2duo-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='coreduo'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='coreduo-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='n270'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='n270-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='phenom'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='phenom-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </cpu>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <memoryBacking supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <enum name='sourceType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>file</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>anonymous</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>memfd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </memoryBacking>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <devices>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <disk supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='diskDevice'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>disk</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>cdrom</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>floppy</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>lun</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='bus'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>ide</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>fdc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>scsi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>sata</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-non-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </disk>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <graphics supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vnc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>egl-headless</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dbus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </graphics>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <video supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='modelType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vga</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>cirrus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>none</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>bochs</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>ramfb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </video>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <hostdev supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='mode'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>subsystem</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='startupPolicy'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>default</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>mandatory</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>requisite</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>optional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='subsysType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pci</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>scsi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='capsType'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='pciBackend'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </hostdev>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <rng supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-non-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>random</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>egd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>builtin</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </rng>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <filesystem supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='driverType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>path</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>handle</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtiofs</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </filesystem>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <tpm supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tpm-tis</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tpm-crb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>emulator</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>external</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendVersion'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>2.0</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </tpm>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <redirdev supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='bus'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </redirdev>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <channel supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pty</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>unix</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </channel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <crypto supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>qemu</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>builtin</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </crypto>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <interface supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>default</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>passt</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </interface>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <panic supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>isa</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>hyperv</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </panic>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <console supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>null</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pty</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dev</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>file</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pipe</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>stdio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>udp</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tcp</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>unix</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>qemu-vdagent</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dbus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </console>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </devices>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <features>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <gic supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <vmcoreinfo supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <genid supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <backingStoreInput supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <backup supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <async-teardown supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <ps2 supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <sev supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <sgx supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <hyperv supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='features'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>relaxed</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vapic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>spinlocks</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vpindex</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>runtime</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>synic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>stimer</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>reset</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vendor_id</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>frequencies</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>reenlightenment</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tlbflush</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>ipi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>avic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>emsr_bitmap</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>xmm_input</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <defaults>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <spinlocks>4095</spinlocks>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <stimer_direct>on</stimer_direct>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </defaults>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </hyperv>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <launchSecurity supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='sectype'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tdx</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </launchSecurity>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </features>
Nov 29 05:28:19 compute-0 nova_compute[253939]: </domainCapabilities>
Nov 29 05:28:19 compute-0 nova_compute[253939]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
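[editor's note] The <domainCapabilities> document logged above can be reproduced outside nova with a direct libvirt query. A minimal sketch, assuming the default qemu:///system connection URI; the emulator path, arch, machine type, and virt type are taken from the log output itself, and getDomainCapabilities is the standard libvirt-python binding:

# Sketch: fetch the same domainCapabilities XML that nova-compute
# logs above. Assumes python3-libvirt is installed and the local
# libvirtd is reachable via qemu:///system (an assumption; adjust
# the URI for your deployment).
import libvirt

conn = libvirt.open('qemu:///system')
try:
    caps_xml = conn.getDomainCapabilities(
        emulatorbin='/usr/libexec/qemu-kvm',  # <path> from the log
        arch='x86_64',
        machine='q35',                        # nova also queries 'pc'
        virttype='kvm',
    )
    print(caps_xml)
finally:
    conn.close()

The CLI equivalent is "virsh domcapabilities --arch x86_64 --machine q35 --virttype kvm", which prints the same document.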
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.349 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
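[editor's note] Most of the volume in these dumps is the per-model <blockers> lists under the custom CPU mode: for every model marked usable='no', libvirt names the features the host lacks. A minimal sketch of summarizing that programmatically, reusing caps_xml from the sketch above; the element and attribute names (mode, model, blockers, feature, usable) match the logged XML:

# Sketch: summarize CPU model usability from a domainCapabilities
# document. caps_xml is the string returned by getDomainCapabilities
# in the previous sketch.
import xml.etree.ElementTree as ET

root = ET.fromstring(caps_xml)
custom = root.find(".//cpu/mode[@name='custom']")

# Models the host can expose as-is (e.g. Westmere in this log).
usable = [m.text for m in custom.findall('model') if m.get('usable') == 'yes']

# For each unusable model, the host-missing features blocking it.
blocked = {
    b.get('model'): [f.get('name') for f in b.findall('feature')]
    for b in custom.findall('blockers')
}

print('usable models:', usable)
print('Skylake-Server blocked by:', blocked.get('Skylake-Server'))

On this host (an AMD EPYC-Rome, per the host-model block below) the Intel-only models are blocked chiefly by AVX-512, TSX (hle/rtm), and pcid-family features, which is why only Westmere and the generic qemu/kvm models report usable='yes'.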
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.356 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 05:28:19 compute-0 nova_compute[253939]: <domainCapabilities>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <domain>kvm</domain>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <arch>x86_64</arch>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <vcpu max='4096'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <iothreads supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <os supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <enum name='firmware'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>efi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <loader supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>rom</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pflash</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='readonly'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>yes</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>no</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='secure'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>yes</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>no</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </loader>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </os>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <cpu>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='host-passthrough' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='hostPassthroughMigratable'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>on</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>off</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='maximum' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='maximumMigratable'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>on</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>off</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='host-model' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <vendor>AMD</vendor>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='x2apic'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='hypervisor'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='stibp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='overflow-recov'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='succor'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='lbrv'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc-scale'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='flushbyasid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='pause-filter'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='pfthreshold'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='disable' name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='custom' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Dhyana-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Genoa'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='auto-ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='auto-ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-128'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-256'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-512'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v6'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v7'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='KnightsMill'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512er'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512pf'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='KnightsMill-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512er'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512pf'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G4-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tbm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G5-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tbm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SierraForest'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cmpccxadd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SierraForest-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cmpccxadd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='athlon'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='athlon-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='core2duo'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='core2duo-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='coreduo'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='coreduo-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='n270'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='n270-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='phenom'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='phenom-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </cpu>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <memoryBacking supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <enum name='sourceType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>file</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>anonymous</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>memfd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </memoryBacking>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <devices>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <disk supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='diskDevice'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>disk</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>cdrom</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>floppy</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>lun</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='bus'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>fdc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>scsi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>sata</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-non-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </disk>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <graphics supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vnc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>egl-headless</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dbus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </graphics>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <video supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='modelType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vga</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>cirrus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>none</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>bochs</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>ramfb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </video>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <hostdev supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='mode'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>subsystem</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='startupPolicy'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>default</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>mandatory</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>requisite</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>optional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='subsysType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pci</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>scsi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='capsType'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='pciBackend'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </hostdev>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <rng supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-non-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>random</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>egd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>builtin</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </rng>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <filesystem supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='driverType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>path</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>handle</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtiofs</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </filesystem>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <tpm supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tpm-tis</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tpm-crb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>emulator</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>external</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendVersion'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>2.0</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </tpm>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <redirdev supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='bus'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </redirdev>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <channel supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pty</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>unix</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </channel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <crypto supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>qemu</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>builtin</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </crypto>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <interface supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>default</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>passt</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </interface>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <panic supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>isa</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>hyperv</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </panic>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <console supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>null</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pty</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dev</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>file</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pipe</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>stdio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>udp</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tcp</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>unix</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>qemu-vdagent</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dbus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </console>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </devices>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <features>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <gic supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <vmcoreinfo supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <genid supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <backingStoreInput supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <backup supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <async-teardown supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <ps2 supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <sev supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <sgx supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <hyperv supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='features'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>relaxed</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vapic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>spinlocks</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vpindex</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>runtime</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>synic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>stimer</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>reset</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vendor_id</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>frequencies</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>reenlightenment</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tlbflush</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>ipi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>avic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>emsr_bitmap</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>xmm_input</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <defaults>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <spinlocks>4095</spinlocks>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <stimer_direct>on</stimer_direct>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </defaults>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </hyperv>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <launchSecurity supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='sectype'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tdx</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </launchSecurity>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </features>
Nov 29 05:28:19 compute-0 nova_compute[253939]: </domainCapabilities>
Nov 29 05:28:19 compute-0 nova_compute[253939]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.414 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 05:28:19 compute-0 nova_compute[253939]: <domainCapabilities>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <domain>kvm</domain>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <arch>x86_64</arch>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <vcpu max='240'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <iothreads supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <os supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <enum name='firmware'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <loader supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>rom</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pflash</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='readonly'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>yes</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>no</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='secure'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>no</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </loader>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </os>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <cpu>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='host-passthrough' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='hostPassthroughMigratable'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>on</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>off</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='maximum' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='maximumMigratable'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>on</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>off</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='host-model' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <vendor>AMD</vendor>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='x2apic'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='hypervisor'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='stibp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='overflow-recov'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='succor'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='lbrv'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='tsc-scale'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='flushbyasid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='pause-filter'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='pfthreshold'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <feature policy='disable' name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <mode name='custom' supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Broadwell-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Cooperlake-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Denverton-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Dhyana-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Genoa'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='auto-ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='auto-ibrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Milan-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amd-psfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='stibp-always-on'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-Rome-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='EPYC-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='GraniteRapids-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-128'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-256'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx10-512'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='prefetchiti'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Haswell-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v6'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Icelake-Server-v7'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='IvyBridge-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='KnightsMill'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512er'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512pf'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='KnightsMill-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512er'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512pf'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G4-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tbm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Opteron_G5-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fma4'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tbm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xop'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SapphireRapids-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='amx-tile'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-bf16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-fp16'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bitalg'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrc'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fzrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='la57'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='taa-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xfd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SierraForest'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cmpccxadd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='SierraForest-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ifma'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cmpccxadd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fbsdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='fsrs'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ibrs-all'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mcdt-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pbrsb-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='psdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='serialize'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vaes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Client-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='hle'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='rtm'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Skylake-Server-v5'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512bw'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512cd'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512dq'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512f'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='avx512vl'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='invpcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pcid'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='pku'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='mpx'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v2'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v3'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='core-capability'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='split-lock-detect'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='Snowridge-v4'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='cldemote'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='erms'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='gfni'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdir64b'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='movdiri'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='xsaves'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='athlon'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='athlon-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='core2duo'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='core2duo-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='coreduo'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='coreduo-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='n270'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='n270-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='ss'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='phenom'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <blockers model='phenom-v1'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnow'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <feature name='3dnowext'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </blockers>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </mode>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </cpu>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <memoryBacking supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <enum name='sourceType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>file</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>anonymous</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <value>memfd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </memoryBacking>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <devices>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <disk supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='diskDevice'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>disk</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>cdrom</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>floppy</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>lun</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='bus'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>ide</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>fdc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>scsi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>sata</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-non-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </disk>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <graphics supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vnc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>egl-headless</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dbus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </graphics>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <video supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='modelType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vga</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>cirrus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>none</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>bochs</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>ramfb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </video>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <hostdev supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='mode'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>subsystem</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='startupPolicy'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>default</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>mandatory</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>requisite</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>optional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='subsysType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pci</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>scsi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='capsType'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='pciBackend'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </hostdev>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <rng supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtio-non-transitional</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>random</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>egd</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>builtin</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </rng>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <filesystem supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='driverType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>path</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>handle</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>virtiofs</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </filesystem>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <tpm supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tpm-tis</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tpm-crb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>emulator</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>external</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendVersion'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>2.0</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </tpm>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <redirdev supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='bus'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>usb</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </redirdev>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <channel supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pty</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>unix</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </channel>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <crypto supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>qemu</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendModel'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>builtin</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </crypto>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <interface supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='backendType'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>default</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>passt</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </interface>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <panic supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='model'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>isa</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>hyperv</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </panic>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <console supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='type'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>null</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vc</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pty</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dev</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>file</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>pipe</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>stdio</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>udp</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tcp</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>unix</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>qemu-vdagent</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>dbus</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </console>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </devices>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   <features>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <gic supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <vmcoreinfo supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <genid supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <backingStoreInput supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <backup supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <async-teardown supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <ps2 supported='yes'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <sev supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <sgx supported='no'/>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <hyperv supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='features'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>relaxed</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vapic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>spinlocks</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vpindex</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>runtime</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>synic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>stimer</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>reset</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>vendor_id</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>frequencies</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>reenlightenment</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tlbflush</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>ipi</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>avic</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>emsr_bitmap</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>xmm_input</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <defaults>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <spinlocks>4095</spinlocks>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <stimer_direct>on</stimer_direct>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </defaults>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </hyperv>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     <launchSecurity supported='yes'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       <enum name='sectype'>
Nov 29 05:28:19 compute-0 nova_compute[253939]:         <value>tdx</value>
Nov 29 05:28:19 compute-0 nova_compute[253939]:       </enum>
Nov 29 05:28:19 compute-0 nova_compute[253939]:     </launchSecurity>
Nov 29 05:28:19 compute-0 nova_compute[253939]:   </features>
Nov 29 05:28:19 compute-0 nova_compute[253939]: </domainCapabilities>
Nov 29 05:28:19 compute-0 nova_compute[253939]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.473 253943 DEBUG nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.474 253943 INFO nova.virt.libvirt.host [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Secure Boot support detected
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.476 253943 INFO nova.virt.libvirt.driver [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.477 253943 INFO nova.virt.libvirt.driver [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.493 253943 DEBUG nova.virt.libvirt.driver [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.555 253943 INFO nova.virt.node [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Determined node identity 59594bc8-0143-475b-913f-cbe106b48966 from /var/lib/nova/compute_id
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.589 253943 WARNING nova.compute.manager [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Compute nodes ['59594bc8-0143-475b-913f-cbe106b48966'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.632 253943 INFO nova.compute.manager [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 05:28:19 compute-0 sudo[254812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bocgauoxbamqdeibekxhjjzukmpwoodc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394099.2363493-1557-227557640959315/AnsiballZ_systemd.py'
Nov 29 05:28:19 compute-0 sudo[254812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.669 253943 WARNING nova.compute.manager [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.669 253943 DEBUG oslo_concurrency.lockutils [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.669 253943 DEBUG oslo_concurrency.lockutils [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.669 253943 DEBUG oslo_concurrency.lockutils [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.670 253943 DEBUG nova.compute.resource_tracker [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:28:19 compute-0 nova_compute[253939]: 2025-11-29 05:28:19.670 253943 DEBUG oslo_concurrency.processutils [None req-430e5848-425a-467c-aca3-25ed9a713d97 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:28:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:19 compute-0 python3.9[254814]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 05:28:20 compute-0 systemd[1]: Stopping nova_compute container...
Nov 29 05:28:20 compute-0 nova_compute[253939]: 2025-11-29 05:28:20.103 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 05:28:20 compute-0 nova_compute[253939]: 2025-11-29 05:28:20.103 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 05:28:20 compute-0 nova_compute[253939]: 2025-11-29 05:28:20.103 253943 DEBUG oslo_concurrency.lockutils [None req-67b932a9-d47a-45d1-9b97-fe4b5ad084d2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 05:28:20 compute-0 virtqemud[254503]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 05:28:20 compute-0 virtqemud[254503]: hostname: compute-0
Nov 29 05:28:20 compute-0 virtqemud[254503]: End of file while reading data: Input/output error
Nov 29 05:28:20 compute-0 systemd[1]: libpod-6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016.scope: Deactivated successfully.
Nov 29 05:28:20 compute-0 systemd[1]: libpod-6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016.scope: Consumed 3.589s CPU time.
Nov 29 05:28:20 compute-0 podman[254838]: 2025-11-29 05:28:20.478535641 +0000 UTC m=+0.440564732 container died 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016-userdata-shm.mount: Deactivated successfully.
Nov 29 05:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2-merged.mount: Deactivated successfully.
Nov 29 05:28:21 compute-0 sshd-session[254610]: Received disconnect from 120.48.175.69 port 54352:11: Bye Bye [preauth]
Nov 29 05:28:21 compute-0 sshd-session[254610]: Disconnected from authenticating user root 120.48.175.69 port 54352 [preauth]
Nov 29 05:28:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:22 compute-0 ceph-mon[75176]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:22 compute-0 podman[254838]: 2025-11-29 05:28:22.408832305 +0000 UTC m=+2.370861396 container cleanup 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:28:22 compute-0 podman[254838]: nova_compute
Nov 29 05:28:22 compute-0 podman[254868]: nova_compute
Nov 29 05:28:22 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 05:28:22 compute-0 systemd[1]: Stopped nova_compute container.
Nov 29 05:28:22 compute-0 systemd[1]: Starting nova_compute container...
Nov 29 05:28:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd269473899572ff98c1f1603823bf00b0a3188db118f457f63a154c6cdb39f2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:22 compute-0 podman[254882]: 2025-11-29 05:28:22.67914701 +0000 UTC m=+0.133575850 container init 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:28:22 compute-0 podman[254882]: 2025-11-29 05:28:22.689215834 +0000 UTC m=+0.143644634 container start 6566bb73024cf8eec0d19b2b47f0a23923c7a75f53810aa1c5376385faa47016 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute)
Nov 29 05:28:22 compute-0 podman[254882]: nova_compute
Nov 29 05:28:22 compute-0 nova_compute[254898]: + sudo -E kolla_set_configs
Nov 29 05:28:22 compute-0 systemd[1]: Started nova_compute container.
Nov 29 05:28:22 compute-0 sudo[254812]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Validating config file
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying service configuration files
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Deleting /etc/ceph
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Creating directory /etc/ceph
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Writing out command to execute
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 05:28:22 compute-0 nova_compute[254898]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 05:28:22 compute-0 nova_compute[254898]: ++ cat /run_command
Nov 29 05:28:22 compute-0 nova_compute[254898]: + CMD=nova-compute
Nov 29 05:28:22 compute-0 nova_compute[254898]: + ARGS=
Nov 29 05:28:22 compute-0 nova_compute[254898]: + sudo kolla_copy_cacerts
Nov 29 05:28:22 compute-0 nova_compute[254898]: + [[ ! -n '' ]]
Nov 29 05:28:22 compute-0 nova_compute[254898]: + . kolla_extend_start
Nov 29 05:28:22 compute-0 nova_compute[254898]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 05:28:22 compute-0 nova_compute[254898]: Running command: 'nova-compute'
Nov 29 05:28:22 compute-0 nova_compute[254898]: + umask 0022
Nov 29 05:28:22 compute-0 nova_compute[254898]: + exec nova-compute
Nov 29 05:28:23 compute-0 ceph-mon[75176]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:23 compute-0 sudo[255059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krtxcycybiqdlhmobaryilykecohwopm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764394103.0210185-1566-221041919148711/AnsiballZ_podman_container.py'
Nov 29 05:28:23 compute-0 sudo[255059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:28:23 compute-0 python3.9[255061]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 05:28:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:23 compute-0 systemd[1]: Started libpod-conmon-8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd.scope.
Nov 29 05:28:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d510b9c85f95babbffbbd9329c549518a622d7206d933d58c2dde118ecee270/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d510b9c85f95babbffbbd9329c549518a622d7206d933d58c2dde118ecee270/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d510b9c85f95babbffbbd9329c549518a622d7206d933d58c2dde118ecee270/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:23 compute-0 podman[255087]: 2025-11-29 05:28:23.998005453 +0000 UTC m=+0.153215485 container init 8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Nov 29 05:28:24 compute-0 podman[255087]: 2025-11-29 05:28:24.006772776 +0000 UTC m=+0.161982768 container start 8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init)
Nov 29 05:28:24 compute-0 python3.9[255061]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
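
The config_data blob in the init/start events above describes the one-shot nova_compute_init container that edpm_ansible launches through the podman_container module. A hypothetical reconstruction of the underlying `podman run`, with the environment and volume list trimmed for brevity (the exact command edpm_ansible issues may differ):

    import subprocess

    cmd = [
        "podman", "run", "--name", "nova_compute_init",
        "--user", "root", "--net", "none",
        "--security-opt", "label=disable",
        "--env", "NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id",
        "--volume", "/var/lib/nova:/var/lib/nova:shared",
        "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "bash", "-c",
        "python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init",
    ]
    subprocess.run(cmd, check=True)
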
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 05:28:24 compute-0 nova_compute_init[255109]: INFO:nova_statedir:Nova statedir ownership complete
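
nova_statedir_ownership.py walks /var/lib/nova, re-chowns anything still owned by the host uid/gid (1000:1000) to the nova uid/gid used inside the container (42436:42436), honours the NOVA_STATEDIR_OWNERSHIP_SKIP exclusion (/var/lib/nova/compute_id here), and resets the SELinux context on each directory. A simplified sketch of the walk-and-chown part only; the SELinux relabelling via the /var/lib/_nova_secontext mount is omitted, and a colon-separated skip list is an assumption:

    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = set(filter(None, os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":")))

    def apply_ownership(root="/var/lib/nova"):
        for dirpath, _dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) == (TARGET_UID, TARGET_GID):
                    continue                             # "Ownership ... already 42436:42436"
                os.lchown(path, TARGET_UID, TARGET_GID)  # "Changing ownership of ... to 42436:42436"
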
Nov 29 05:28:24 compute-0 systemd[1]: libpod-8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd.scope: Deactivated successfully.
Nov 29 05:28:24 compute-0 podman[255124]: 2025-11-29 05:28:24.140530849 +0000 UTC m=+0.030783015 container died 8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init)
Nov 29 05:28:24 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd-userdata-shm.mount: Deactivated successfully.
Nov 29 05:28:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d510b9c85f95babbffbbd9329c549518a622d7206d933d58c2dde118ecee270-merged.mount: Deactivated successfully.
Nov 29 05:28:24 compute-0 podman[255124]: 2025-11-29 05:28:24.179950322 +0000 UTC m=+0.070202458 container cleanup 8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3)
Nov 29 05:28:24 compute-0 systemd[1]: libpod-conmon-8bd1d6b6938bd9a6ec2331a82e9f3112ec96a19040d1ababb25f1ca4f1e4d7dd.scope: Deactivated successfully.
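
The died/cleanup/Deactivated lines are the normal end of a short-lived init container: the script exits, conmon reaps it, and systemd tears down the transient scope and overlay mounts. Whether the run actually succeeded can be read back from podman afterwards; a small sketch, assuming podman is on PATH:

    import json
    import subprocess

    def container_exit_code(name="nova_compute_init"):
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)[0]["State"]["ExitCode"]   # 0 for the successful run above
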
Nov 29 05:28:24 compute-0 sudo[255059]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:24 compute-0 ceph-mon[75176]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:24 compute-0 nova_compute[254898]: 2025-11-29 05:28:24.788 254902 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 05:28:24 compute-0 nova_compute[254898]: 2025-11-29 05:28:24.789 254902 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 05:28:24 compute-0 nova_compute[254898]: 2025-11-29 05:28:24.789 254902 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 05:28:24 compute-0 nova_compute[254898]: 2025-11-29 05:28:24.789 254902 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
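
os_vif discovers its VIF plugins through setuptools entry points registered under the os_vif namespace, which is what the three "Loaded VIF plugin class" lines report. The same discovery can be reproduced with stevedore, the plugin library os_vif builds on:

    from stevedore import extension

    def list_vif_plugins():
        mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
        return sorted(mgr.names())      # on this host: ['linux_bridge', 'noop', 'ovs']
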
Nov 29 05:28:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:24 compute-0 sshd-session[223949]: Connection closed by 192.168.122.30 port 34648
Nov 29 05:28:24 compute-0 sshd-session[223946]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:28:24 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 05:28:24 compute-0 systemd[1]: session-49.scope: Consumed 2min 36.249s CPU time.
Nov 29 05:28:24 compute-0 systemd-logind[793]: Session 49 logged out. Waiting for processes to exit.
Nov 29 05:28:24 compute-0 systemd-logind[793]: Removed session 49.
Nov 29 05:28:24 compute-0 nova_compute[254898]: 2025-11-29 05:28:24.918 254902 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:28:24 compute-0 nova_compute[254898]: 2025-11-29 05:28:24.944 254902 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:28:24 compute-0 nova_compute[254898]: 2025-11-29 05:28:24.944 254902 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
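
This probe greps the iscsiadm binary for the string node.session.scan to decide whether manual-scan mode is available. grep exits 0 on a match, 1 on no match, and greater than 1 on a real error, so the returned 1 and the "failed. Not Retrying." line mean "string not found", not a crash; unsurprising here, since the run-on-host shim was copied over /usr/sbin/iscsiadm earlier in this log (and /sbin is typically a symlink to /usr/sbin). The equivalent check in Python:

    import subprocess

    def supports_manual_scan(binary="/sbin/iscsiadm"):
        # grep -F: fixed-string search; exit 0 = found, 1 = not found, >1 = error.
        res = subprocess.run(["grep", "-F", "node.session.scan", binary],
                             capture_output=True)
        return res.returncode == 0
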
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.415 254902 INFO nova.virt.driver [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.539 254902 INFO nova.compute.provider_config [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.555 254902 DEBUG oslo_concurrency.lockutils [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.555 254902 DEBUG oslo_concurrency.lockutils [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.555 254902 DEBUG oslo_concurrency.lockutils [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
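
Everything from "Full set of CONF:" onward is oslo.config introspecting itself: log_opt_values() prints every registered option at DEBUG, bare options first and then dotted groups such as oslo_concurrency.* and api.*, with secret options like transport_url masked as ****. Any oslo.config-based service can produce the same dump; a minimal sketch:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    CONF = cfg.CONF
    CONF([], project="demo")    # parse no CLI args and no config files
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
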
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.556 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.556 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.556 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.556 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.557 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.557 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.557 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.557 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.557 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.558 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.558 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.558 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.558 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.558 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.559 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.559 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.559 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.559 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.559 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.560 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.560 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.560 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.560 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.560 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.561 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.561 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.561 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.561 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.561 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.562 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.562 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.562 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.562 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.562 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.563 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.563 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.563 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.563 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.564 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.564 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.564 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.564 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.565 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.565 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.565 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.565 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.565 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.566 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.566 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.566 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.566 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.566 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.567 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.567 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.567 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.567 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.567 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.568 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.568 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.568 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.568 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.568 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.569 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.570 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.570 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.570 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.570 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.570 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.571 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.571 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.571 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.571 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.571 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.572 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.572 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.572 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.572 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.572 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.573 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.573 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.573 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.573 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.573 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.574 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.574 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.574 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.574 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.574 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.575 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.576 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.576 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.576 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.576 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.576 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.577 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.577 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.577 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.577 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.577 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.578 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.578 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.578 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.578 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.578 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.579 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.580 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.581 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.582 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.583 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.584 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.585 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.586 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.587 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.588 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.589 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.590 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.591 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.592 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.593 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.594 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.595 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.596 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.597 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.598 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.599 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.600 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.601 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.602 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.603 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.604 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.605 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.606 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.607 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.608 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.609 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.610 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.611 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
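Every line in this dump comes from one call: at service start-up, oslo.service hands the parsed configuration to ConfigOpts.log_opt_values(), which walks each registered group (hyperv, mks, image_cache, ...) and emits one DEBUG record per option from oslo_config/cfg.py. A minimal sketch of that mechanism, with two illustrative options standing in for nova's real hyperv set:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.BoolOpt('config_drive_cdrom', default=False),
            cfg.FloatOpt('dynamic_memory_ratio', default=1.0),
        ],
        group='hyperv',
    )

    # Parse an empty command line, then log one "group.option = value"
    # DEBUG line per registered option, exactly like the dump above.
    CONF([])
    CONF.log_opt_values(LOG, logging.DEBUG)

The hyperv.* values above are all defaults: this node runs the libvirt driver, so the Hyper-V group is registered but never consulted.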
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.612 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.613 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
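The image_cache.* values drive the periodic cache cleaner: on each manager_interval (2400 s) pass, and with remove_unused_base_images = True, a cached base image becomes removable once it has gone unused for remove_unused_original_minimum_age_seconds (86400 s), or 3600 s for resized copies. A hedged sketch of that aging rule; the function and the mtime heuristic are illustrative, not nova's internals:

    import os
    import time

    ORIGINAL_MIN_AGE = 86400  # image_cache.remove_unused_original_minimum_age_seconds
    RESIZED_MIN_AGE = 3600    # image_cache.remove_unused_resized_minimum_age_seconds

    def is_removable(path, resized=False, now=None):
        """True once a cached base image has been unused long enough."""
        now = now if now is not None else time.time()
        min_age = RESIZED_MIN_AGE if resized else ORIGINAL_MIN_AGE
        # mtime stands in for "last used"; nova tracks usage more precisely.
        return now - os.path.getmtime(path) >= min_age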
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.614 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.615 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.616 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
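Most ironic.* options are None because this is a libvirt node, not an Ironic one; the pair that matters when the driver is active is api_max_retries = 60 with api_retry_interval = 2, a retry budget of roughly 120 s against a busy Ironic API. An illustrative loop showing that budget (not nova's actual client code):

    import time

    API_MAX_RETRIES = 60     # ironic.api_max_retries
    API_RETRY_INTERVAL = 2   # ironic.api_retry_interval -> worst case ~120 s

    def call_with_retries(fn):
        for attempt in range(1, API_MAX_RETRIES + 1):
            try:
                return fn()
            except ConnectionError:
                if attempt == API_MAX_RETRIES:
                    raise
                time.sleep(API_RETRY_INTERVAL)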
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
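key_manager.fixed_key prints as **** because oslo.config masks any option registered with secret=True when log_opt_values() runs; the same masking explains neutron.metadata_proxy_shared_secret = **** further down. A minimal sketch with a made-up value:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.StrOpt('fixed_key', secret=True)],
                       group='key_manager')

    CONF([])
    CONF.set_override('fixed_key', 'deadbeef', group='key_manager')
    CONF.log_opt_values(LOG, logging.DEBUG)  # key_manager.fixed_key = ****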
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.617 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.618 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.619 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
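key_manager.backend = barbican selects castellan's Barbican driver, and the barbican.* group above configures it; auth_endpoint is still the stock http://localhost/identity/v3 default, and number_of_retries = 60 with retry_delay = 1 bounds order polling at about a minute. A hedged sketch of how a service obtains the configured backend through castellan (actual key operations would additionally need an authenticated context):

    from castellan import key_manager
    from oslo_config import cfg

    # Returns the backend named by key_manager.backend, i.e. the
    # Barbican key manager for the values logged above.
    manager = key_manager.API(cfg.CONF)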
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.620 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.621 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.622 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.623 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
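The vault.* group is castellan's alternative backend, inactive here since key_manager.backend = barbican; kv_mountpoint = secret and kv_version = 2 match Vault's default KV v2 engine. A hedged sketch of the equivalent direct read with the hvac client (the secret path is illustrative):

    import hvac

    client = hvac.Client(url='http://127.0.0.1:8200')  # vault.vault_url
    secret = client.secrets.kv.v2.read_secret_version(
        path='nova-key',        # illustrative secret path
        mount_point='secret',   # vault.kv_mountpoint
    )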
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.624 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.625 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
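The keystone.* block is a standard keystoneauth adapter section: service_type = identity plus valid_interfaces = ['internal', 'public'] means endpoint lookup tries the internal endpoint first and falls back to public. A hedged sketch of how such a section becomes an Adapter (auth plugin and session wiring omitted):

    from keystoneauth1 import adapter, session

    sess = session.Session()  # real code attaches an auth plugin here
    identity = adapter.Adapter(
        session=sess,
        service_type='identity',           # keystone.service_type
        interface=['internal', 'public'],  # keystone.valid_interfaces
    )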
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.626 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.627 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.628 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.629 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.630 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.631 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 WARNING oslo_config.cfg [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 05:28:25 compute-0 nova_compute[254898]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 05:28:25 compute-0 nova_compute[254898]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 05:28:25 compute-0 nova_compute[254898]: and ``live_migration_inbound_addr`` respectively.
Nov 29 05:28:25 compute-0 nova_compute[254898]: ).  Its value may be silently ignored in the future.
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
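The deprecated libvirt.live_migration_uri keeps a %s placeholder that nova fills with the migration target host; the replacement pair (live_migration_scheme, live_migration_inbound_addr) derives the same URI instead. The substitution itself is plain string formatting (the target hostname below is illustrative):

    live_migration_uri = 'qemu+tls://%s/system'
    dest_host = 'compute-1.example.com'  # assumed migration target

    uri = live_migration_uri % dest_host
    # -> 'qemu+tls://compute-1.example.com/system'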
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.632 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.633 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.634 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rbd_secret_uuid        = 93f82912-647c-5e78-b081-707d0a2966d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.635 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.636 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.637 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.638 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.639 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
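Taken together, libvirt.images_type = rbd, images_rbd_pool = vms, rbd_user = openstack and rbd_secret_uuid put every instance disk in Ceph rather than on local storage. A hedged sketch of the resulting libvirt <disk> element, using nova's <instance_uuid>_disk naming convention (the instance UUID is illustrative; user and secret uuid are the values logged above):

    instance_uuid = '11111111-2222-3333-4444-555555555555'
    volume = 'vms/%s_disk' % instance_uuid   # libvirt.images_rbd_pool

    disk_xml = """
    <disk type='network' device='disk'>
      <source protocol='rbd' name='%s'/>
      <auth username='openstack'>
        <secret type='ceph' uuid='93f82912-647c-5e78-b081-707d0a2966d8'/>
      </auth>
    </disk>
    """ % volume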
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.640 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.641 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.642 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.643 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.643 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.643 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.643 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.643 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.644 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.645 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.646 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.647 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.648 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.649 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.650 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.651 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.652 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.653 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.654 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.655 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.656 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.657 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.658 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.659 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.660 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.661 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.662 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.663 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.664 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.665 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.666 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.667 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.668 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.669 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.670 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.671 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.672 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.673 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.674 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.675 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.676 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.677 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.678 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.679 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.680 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.681 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.682 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.683 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.684 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.685 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.686 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.687 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.688 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.689 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.690 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.691 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.692 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.693 254902 DEBUG oslo_service.service [None req-2b7f3d31-ec62-49d9-9914-7e797262dc73 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.694 254902 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.721 254902 INFO nova.virt.node [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Determined node identity 59594bc8-0143-475b-913f-cbe106b48966 from /var/lib/nova/compute_id
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.721 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.722 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.723 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.723 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 05:28:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.735 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7feb889764c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.738 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7feb889764c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.740 254902 INFO nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Connection event '1' reason 'None'
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.746 254902 INFO nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 05:28:25 compute-0 nova_compute[254898]: 
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <host>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <uuid>60584de4-e080-4148-9fd9-37c7db79f006</uuid>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <cpu>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <arch>x86_64</arch>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model>EPYC-Rome-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <vendor>AMD</vendor>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <microcode version='16777317'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <signature family='23' model='49' stepping='0'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='x2apic'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='tsc-deadline'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='osxsave'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='hypervisor'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='tsc_adjust'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='spec-ctrl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='stibp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='arch-capabilities'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='cmp_legacy'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='topoext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='virt-ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='lbrv'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='tsc-scale'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='vmcb-clean'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='pause-filter'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='pfthreshold'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='svme-addr-chk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='rdctl-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='skip-l1dfl-vmentry'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='mds-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature name='pschange-mc-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <pages unit='KiB' size='4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <pages unit='KiB' size='2048'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <pages unit='KiB' size='1048576'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </cpu>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <power_management>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <suspend_mem/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </power_management>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <iommu support='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <migration_features>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <live/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <uri_transports>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <uri_transport>tcp</uri_transport>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <uri_transport>rdma</uri_transport>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </uri_transports>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </migration_features>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <topology>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <cells num='1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <cell id='0'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:           <memory unit='KiB'>7864320</memory>
Nov 29 05:28:25 compute-0 nova_compute[254898]:           <pages unit='KiB' size='4'>1966080</pages>
Nov 29 05:28:25 compute-0 nova_compute[254898]:           <pages unit='KiB' size='2048'>0</pages>
Nov 29 05:28:25 compute-0 nova_compute[254898]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 29 05:28:25 compute-0 nova_compute[254898]:           <distances>
Nov 29 05:28:25 compute-0 nova_compute[254898]:             <sibling id='0' value='10'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:           </distances>
Nov 29 05:28:25 compute-0 nova_compute[254898]:           <cpus num='8'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:           </cpus>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         </cell>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </cells>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </topology>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <cache>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </cache>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <secmodel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model>selinux</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <doi>0</doi>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </secmodel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <secmodel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model>dac</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <doi>0</doi>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </secmodel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </host>
Nov 29 05:28:25 compute-0 nova_compute[254898]: 
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <guest>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <os_type>hvm</os_type>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <arch name='i686'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <wordsize>32</wordsize>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <domain type='qemu'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <domain type='kvm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </arch>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <features>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <pae/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <nonpae/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <acpi default='on' toggle='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <apic default='on' toggle='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <cpuselection/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <deviceboot/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <disksnapshot default='on' toggle='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <externalSnapshot/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </features>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </guest>
Nov 29 05:28:25 compute-0 nova_compute[254898]: 
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <guest>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <os_type>hvm</os_type>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <arch name='x86_64'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <wordsize>64</wordsize>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <domain type='qemu'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <domain type='kvm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </arch>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <features>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <acpi default='on' toggle='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <apic default='on' toggle='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <cpuselection/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <deviceboot/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <disksnapshot default='on' toggle='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <externalSnapshot/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </features>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </guest>
Nov 29 05:28:25 compute-0 nova_compute[254898]: 
Nov 29 05:28:25 compute-0 nova_compute[254898]: </capabilities>
Nov 29 05:28:25 compute-0 nova_compute[254898]: 
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.752 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.754 254902 DEBUG nova.virt.libvirt.volume.mount [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.757 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 05:28:25 compute-0 nova_compute[254898]: <domainCapabilities>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <domain>kvm</domain>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <arch>i686</arch>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <vcpu max='240'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <iothreads supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <os supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <enum name='firmware'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <loader supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>rom</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pflash</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='readonly'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>yes</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>no</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='secure'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>no</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </loader>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </os>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <cpu>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='host-passthrough' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='hostPassthroughMigratable'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>on</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>off</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='maximum' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='maximumMigratable'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>on</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>off</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='host-model' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <vendor>AMD</vendor>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='x2apic'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='hypervisor'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='stibp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='overflow-recov'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='succor'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='lbrv'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc-scale'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='flushbyasid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='pause-filter'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='pfthreshold'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='disable' name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='custom' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Dhyana-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Genoa'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='auto-ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='auto-ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-128'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-256'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-512'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v6'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v7'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='KnightsMill'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512er'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512pf'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='KnightsMill-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512er'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512pf'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G4-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tbm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G5-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tbm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SierraForest'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cmpccxadd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SierraForest-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cmpccxadd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='athlon'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='athlon-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='core2duo'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='core2duo-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='coreduo'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='coreduo-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='n270'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='n270-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='phenom'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='phenom-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </cpu>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <memoryBacking supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <enum name='sourceType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>file</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>anonymous</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>memfd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </memoryBacking>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <devices>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <disk supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='diskDevice'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>disk</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>cdrom</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>floppy</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>lun</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='bus'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>ide</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>fdc</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>scsi</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>sata</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-non-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </disk>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <graphics supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vnc</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>egl-headless</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>dbus</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </graphics>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <video supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='modelType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vga</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>cirrus</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>none</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>bochs</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>ramfb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </video>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <hostdev supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='mode'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>subsystem</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='startupPolicy'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>default</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>mandatory</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>requisite</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>optional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='subsysType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pci</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>scsi</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='capsType'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='pciBackend'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </hostdev>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <rng supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-non-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>random</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>egd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>builtin</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </rng>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <filesystem supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='driverType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>path</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>handle</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtiofs</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </filesystem>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <tpm supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tpm-tis</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tpm-crb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>emulator</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>external</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendVersion'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>2.0</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </tpm>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <redirdev supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='bus'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </redirdev>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <channel supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pty</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>unix</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </channel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <crypto supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>qemu</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>builtin</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </crypto>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <interface supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>default</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>passt</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </interface>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <panic supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>isa</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>hyperv</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </panic>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <console supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>null</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vc</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pty</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>dev</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>file</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pipe</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>stdio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>udp</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tcp</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>unix</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>qemu-vdagent</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>dbus</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </console>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </devices>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <features>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <gic supported='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <vmcoreinfo supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <genid supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <backingStoreInput supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <backup supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <async-teardown supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <ps2 supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <sev supported='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <sgx supported='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <hyperv supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='features'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>relaxed</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vapic</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>spinlocks</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vpindex</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>runtime</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>synic</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>stimer</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>reset</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vendor_id</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>frequencies</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>reenlightenment</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tlbflush</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>ipi</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>avic</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>emsr_bitmap</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>xmm_input</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <defaults>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <spinlocks>4095</spinlocks>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <stimer_direct>on</stimer_direct>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </defaults>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </hyperv>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <launchSecurity supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='sectype'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tdx</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </launchSecurity>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </features>
Nov 29 05:28:25 compute-0 nova_compute[254898]: </domainCapabilities>
Nov 29 05:28:25 compute-0 nova_compute[254898]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.765 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 05:28:25 compute-0 nova_compute[254898]: <domainCapabilities>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <domain>kvm</domain>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <arch>i686</arch>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <vcpu max='4096'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <iothreads supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <os supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <enum name='firmware'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <loader supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>rom</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pflash</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='readonly'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>yes</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>no</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='secure'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>no</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </loader>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </os>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <cpu>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='host-passthrough' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='hostPassthroughMigratable'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>on</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>off</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='maximum' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='maximumMigratable'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>on</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>off</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='host-model' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <vendor>AMD</vendor>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='x2apic'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='hypervisor'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='stibp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='overflow-recov'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='succor'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='lbrv'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc-scale'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='flushbyasid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='pause-filter'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='pfthreshold'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='disable' name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='custom' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Dhyana-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Genoa'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='auto-ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='auto-ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-128'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-256'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-512'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v6'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v7'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='KnightsMill'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512er'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512pf'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='KnightsMill-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512er'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512pf'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G4-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tbm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G5-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tbm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SierraForest'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cmpccxadd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SierraForest-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cmpccxadd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='athlon'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='athlon-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='core2duo'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='core2duo-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='coreduo'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='coreduo-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='n270'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='n270-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='phenom'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='phenom-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </cpu>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <memoryBacking supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <enum name='sourceType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>file</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>anonymous</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>memfd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </memoryBacking>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <devices>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <disk supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='diskDevice'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>disk</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>cdrom</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>floppy</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>lun</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='bus'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>fdc</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>scsi</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>sata</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-non-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </disk>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <graphics supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vnc</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>egl-headless</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>dbus</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </graphics>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <video supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='modelType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vga</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>cirrus</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>none</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>bochs</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>ramfb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </video>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <hostdev supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='mode'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>subsystem</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='startupPolicy'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>default</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>mandatory</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>requisite</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>optional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='subsysType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pci</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>scsi</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='capsType'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='pciBackend'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </hostdev>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <rng supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-non-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>random</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>egd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>builtin</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </rng>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <filesystem supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='driverType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>path</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>handle</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtiofs</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </filesystem>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <tpm supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tpm-tis</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tpm-crb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>emulator</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>external</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendVersion'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>2.0</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </tpm>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <redirdev supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='bus'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </redirdev>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <channel supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pty</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>unix</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </channel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <crypto supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>qemu</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>builtin</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </crypto>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <interface supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>default</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>passt</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </interface>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <panic supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>isa</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>hyperv</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </panic>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <console supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>null</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vc</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pty</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>dev</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>file</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pipe</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>stdio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>udp</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tcp</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>unix</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>qemu-vdagent</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>dbus</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </console>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </devices>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <features>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <gic supported='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <vmcoreinfo supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <genid supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <backingStoreInput supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <backup supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <async-teardown supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <ps2 supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <sev supported='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <sgx supported='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <hyperv supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='features'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>relaxed</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vapic</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>spinlocks</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vpindex</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>runtime</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>synic</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>stimer</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>reset</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vendor_id</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>frequencies</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>reenlightenment</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tlbflush</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>ipi</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>avic</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>emsr_bitmap</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>xmm_input</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <defaults>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <spinlocks>4095</spinlocks>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <stimer_direct>on</stimer_direct>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </defaults>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </hyperv>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <launchSecurity supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='sectype'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tdx</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </launchSecurity>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </features>
Nov 29 05:28:25 compute-0 nova_compute[254898]: </domainCapabilities>
Nov 29 05:28:25 compute-0 nova_compute[254898]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.803 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.808 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 05:28:25 compute-0 nova_compute[254898]: <domainCapabilities>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <domain>kvm</domain>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <arch>x86_64</arch>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <vcpu max='240'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <iothreads supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <os supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <enum name='firmware'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <loader supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>rom</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pflash</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='readonly'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>yes</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>no</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='secure'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>no</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </loader>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </os>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <cpu>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='host-passthrough' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='hostPassthroughMigratable'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>on</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>off</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='maximum' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='maximumMigratable'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>on</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>off</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='host-model' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <vendor>AMD</vendor>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='x2apic'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='hypervisor'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='stibp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='overflow-recov'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='succor'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='lbrv'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc-scale'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='flushbyasid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='pause-filter'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='pfthreshold'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='disable' name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='custom' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Dhyana-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Genoa'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='auto-ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='auto-ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-128'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-256'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-512'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v6'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v7'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='KnightsMill'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512er'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512pf'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='KnightsMill-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512er'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512pf'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G4-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tbm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G5-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tbm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SierraForest'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cmpccxadd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SierraForest-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cmpccxadd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='athlon'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='athlon-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='core2duo'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='core2duo-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='coreduo'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='coreduo-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='n270'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='n270-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='phenom'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='phenom-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </cpu>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <memoryBacking supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <enum name='sourceType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>file</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>anonymous</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>memfd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </memoryBacking>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <devices>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <disk supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='diskDevice'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>disk</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>cdrom</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>floppy</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>lun</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='bus'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>ide</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>fdc</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>scsi</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>sata</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-non-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </disk>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <graphics supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vnc</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>egl-headless</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>dbus</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </graphics>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <video supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='modelType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vga</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>cirrus</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>none</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>bochs</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>ramfb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </video>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <hostdev supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='mode'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>subsystem</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='startupPolicy'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>default</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>mandatory</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>requisite</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>optional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='subsysType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pci</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>scsi</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='capsType'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='pciBackend'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </hostdev>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <rng supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtio-non-transitional</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>random</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>egd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>builtin</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </rng>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <filesystem supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='driverType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>path</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>handle</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>virtiofs</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </filesystem>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <tpm supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tpm-tis</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tpm-crb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>emulator</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>external</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendVersion'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>2.0</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </tpm>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <redirdev supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='bus'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </redirdev>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <channel supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pty</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>unix</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </channel>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <crypto supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>qemu</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>builtin</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </crypto>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <interface supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='backendType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>default</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>passt</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </interface>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <panic supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>isa</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>hyperv</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </panic>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <console supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>null</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vc</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pty</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>dev</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>file</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pipe</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>stdio</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>udp</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tcp</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>unix</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>qemu-vdagent</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>dbus</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </console>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </devices>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <features>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <gic supported='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <vmcoreinfo supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <genid supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <backingStoreInput supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <backup supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <async-teardown supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <ps2 supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <sev supported='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <sgx supported='no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <hyperv supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='features'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>relaxed</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vapic</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>spinlocks</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vpindex</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>runtime</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>synic</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>stimer</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>reset</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>vendor_id</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>frequencies</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>reenlightenment</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tlbflush</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>ipi</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>avic</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>emsr_bitmap</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>xmm_input</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <defaults>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <spinlocks>4095</spinlocks>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <stimer_direct>on</stimer_direct>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </defaults>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </hyperv>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <launchSecurity supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='sectype'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>tdx</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </launchSecurity>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </features>
Nov 29 05:28:25 compute-0 nova_compute[254898]: </domainCapabilities>
Nov 29 05:28:25 compute-0 nova_compute[254898]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 05:28:25 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.871 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 05:28:25 compute-0 nova_compute[254898]: <domainCapabilities>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <path>/usr/libexec/qemu-kvm</path>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <domain>kvm</domain>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <arch>x86_64</arch>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <vcpu max='4096'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <iothreads supported='yes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <os supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <enum name='firmware'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>efi</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <loader supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>rom</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>pflash</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='readonly'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>yes</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>no</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='secure'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>yes</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>no</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </loader>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </os>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <cpu>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='host-passthrough' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='hostPassthroughMigratable'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>on</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>off</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='maximum' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <enum name='maximumMigratable'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>on</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <value>off</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='host-model' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <vendor>AMD</vendor>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='x2apic'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc-deadline'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='hypervisor'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc_adjust'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='spec-ctrl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='stibp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='cmp_legacy'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='overflow-recov'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='succor'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='amd-ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='virt-ssbd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='lbrv'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='tsc-scale'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='vmcb-clean'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='flushbyasid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='pause-filter'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='pfthreshold'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='svme-addr-chk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <feature policy='disable' name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <mode name='custom' supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Broadwell-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cascadelake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Cooperlake-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Denverton-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Dhyana-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Genoa'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='auto-ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Genoa-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='auto-ibrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Milan-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amd-psfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='no-nested-data-bp'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='null-sel-clr-base'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='stibp-always-on'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-Rome-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='EPYC-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='GraniteRapids-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-128'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-256'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx10-512'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='prefetchiti'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Haswell-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-noTSX'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v6'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Icelake-Server-v7'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='IvyBridge-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='KnightsMill'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512er'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512pf'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='KnightsMill-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4fmaps'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-4vnniw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512er'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512pf'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G4-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tbm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Opteron_G5-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fma4'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tbm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xop'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SapphireRapids-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='amx-tile'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-bf16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-fp16'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512-vpopcntdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bitalg'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vbmi2'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrc'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fzrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='la57'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='taa-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='tsx-ldtrk'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xfd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SierraForest'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cmpccxadd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='SierraForest-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ifma'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-ne-convert'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx-vnni-int8'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='bus-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cmpccxadd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fbsdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='fsrs'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ibrs-all'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mcdt-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pbrsb-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='psdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='sbdr-ssdp-no'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='serialize'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vaes'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='vpclmulqdq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Client-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='hle'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='rtm'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Skylake-Server-v5'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512bw'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512cd'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512dq'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512f'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='avx512vl'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='invpcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pcid'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='pku'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='mpx'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v2'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v3'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='core-capability'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='split-lock-detect'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='Snowridge-v4'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='cldemote'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='erms'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='gfni'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdir64b'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='movdiri'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='xsaves'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='athlon'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='athlon-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='core2duo'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='core2duo-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='coreduo'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='coreduo-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='n270'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='n270-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='ss'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='phenom'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <blockers model='phenom-v1'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnow'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:         <feature name='3dnowext'/>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       </blockers>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </mode>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   </cpu>
Nov 29 05:28:25 compute-0 nova_compute[254898]:   <memoryBacking supported='yes'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     <enum name='sourceType'>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>file</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>anonymous</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:       <value>memfd</value>
Nov 29 05:28:25 compute-0 nova_compute[254898]:     </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:   </memoryBacking>
Nov 29 05:28:26 compute-0 nova_compute[254898]:   <devices>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <disk supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='diskDevice'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>disk</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>cdrom</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>floppy</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>lun</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='bus'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>fdc</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>scsi</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>sata</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>virtio-transitional</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>virtio-non-transitional</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </disk>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <graphics supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>vnc</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>egl-headless</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>dbus</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </graphics>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <video supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='modelType'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>vga</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>cirrus</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>none</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>bochs</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>ramfb</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </video>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <hostdev supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='mode'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>subsystem</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='startupPolicy'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>default</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>mandatory</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>requisite</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>optional</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='subsysType'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>pci</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>scsi</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='capsType'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='pciBackend'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </hostdev>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <rng supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>virtio</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>virtio-transitional</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>virtio-non-transitional</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>random</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>egd</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>builtin</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </rng>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <filesystem supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='driverType'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>path</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>handle</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>virtiofs</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </filesystem>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <tpm supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>tpm-tis</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>tpm-crb</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>emulator</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>external</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='backendVersion'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>2.0</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </tpm>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <redirdev supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='bus'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>usb</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </redirdev>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <channel supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>pty</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>unix</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </channel>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <crypto supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='model'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>qemu</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='backendModel'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>builtin</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </crypto>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <interface supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='backendType'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>default</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>passt</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </interface>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <panic supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='model'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>isa</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>hyperv</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </panic>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <console supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='type'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>null</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>vc</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>pty</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>dev</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>file</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>pipe</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>stdio</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>udp</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>tcp</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>unix</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>qemu-vdagent</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>dbus</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </console>
Nov 29 05:28:26 compute-0 nova_compute[254898]:   </devices>
Nov 29 05:28:26 compute-0 nova_compute[254898]:   <features>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <gic supported='no'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <vmcoreinfo supported='yes'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <genid supported='yes'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <backingStoreInput supported='yes'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <backup supported='yes'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <async-teardown supported='yes'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <ps2 supported='yes'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <sev supported='no'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <sgx supported='no'/>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <hyperv supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='features'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>relaxed</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>vapic</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>spinlocks</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>vpindex</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>runtime</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>synic</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>stimer</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>reset</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>vendor_id</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>frequencies</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>reenlightenment</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>tlbflush</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>ipi</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>avic</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>emsr_bitmap</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>xmm_input</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <defaults>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <spinlocks>4095</spinlocks>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <stimer_direct>on</stimer_direct>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <tlbflush_direct>on</tlbflush_direct>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <tlbflush_extended>on</tlbflush_extended>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </defaults>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </hyperv>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     <launchSecurity supported='yes'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       <enum name='sectype'>
Nov 29 05:28:26 compute-0 nova_compute[254898]:         <value>tdx</value>
Nov 29 05:28:26 compute-0 nova_compute[254898]:       </enum>
Nov 29 05:28:26 compute-0 nova_compute[254898]:     </launchSecurity>
Nov 29 05:28:26 compute-0 nova_compute[254898]:   </features>
Nov 29 05:28:26 compute-0 nova_compute[254898]: </domainCapabilities>
Nov 29 05:28:26 compute-0 nova_compute[254898]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
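The domainCapabilities XML dumped above lists every CPU model libvirt knows for this host: each model carries usable='yes' or usable='no', and every unusable model is followed by a <blockers> element naming the features the host lacks (here largely AVX-512, AMX and TSX bits, consistent with the Westmere-era models being the only usable Intel entries). A minimal Python sketch of reading such a dump, assuming it was saved to a file named domcaps.xml (hypothetical name, e.g. via `virsh domcapabilities > domcaps.xml`); this is illustrative only, not nova's code:

    # Sketch: list CPU models from a saved domainCapabilities dump and
    # show which missing host features block the unusable ones.
    import xml.etree.ElementTree as ET

    tree = ET.parse('domcaps.xml')                    # hypothetical file name
    mode = tree.find(".//cpu/mode[@name='custom']")   # custom-mode model list
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in mode.findall('blockers')
    }
    for model in mode.findall('model'):
        if model.get('usable') == 'yes':
            print('usable :', model.text)
        else:
            print('blocked:', model.text,
                  '- missing:', ', '.join(blockers.get(model.text, [])))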
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.939 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.939 254902 INFO nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Secure Boot support detected
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.941 254902 INFO nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.942 254902 INFO nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.951 254902 DEBUG nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
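The "Enabling emulated TPM support" check above only passes when swtpm support is switched on in nova's configuration. A minimal [libvirt] snippet with the relevant nova.conf options (values illustrative; swtpm itself must also be installed on the host):

    [libvirt]
    swtpm_enabled = True
    swtpm_user = tss
    swtpm_group = tss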
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:25.979 254902 INFO nova.virt.node [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Determined node identity 59594bc8-0143-475b-913f-cbe106b48966 from /var/lib/nova/compute_id
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.002 254902 WARNING nova.compute.manager [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Compute nodes ['59594bc8-0143-475b-913f-cbe106b48966'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.038 254902 INFO nova.compute.manager [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.067 254902 WARNING nova.compute.manager [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.068 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.068 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.068 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.069 254902 DEBUG nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.069 254902 DEBUG oslo_concurrency.processutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:28:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:28:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129658402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.480 254902 DEBUG oslo_concurrency.processutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
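As the two lines above show, the resource audit sizes the RBD-backed disk by shelling out to ceph. A sketch that issues the same call and reads the cluster totals, reusing the client id and conf path from the log; the JSON keys follow the `ceph df --format=json` schema:

    # Sketch: the same pool-sizing call the resource tracker logs above.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    print('total bytes:', stats['total_bytes'])
    print('avail bytes:', stats['total_avail_bytes'])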
Nov 29 05:28:26 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 05:28:26 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 29 05:28:26 compute-0 ceph-mon[75176]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3129658402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.869 254902 WARNING nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.871 254902 DEBUG nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5147MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.871 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.871 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.903 254902 WARNING nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] No compute node record for compute-0.ctlplane.example.com:59594bc8-0143-475b-913f-cbe106b48966: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 59594bc8-0143-475b-913f-cbe106b48966 could not be found.
Nov 29 05:28:26 compute-0 nova_compute[254898]: 2025-11-29 05:28:26.928 254902 INFO nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 59594bc8-0143-475b-913f-cbe106b48966
Nov 29 05:28:27 compute-0 nova_compute[254898]: 2025-11-29 05:28:27.002 254902 DEBUG nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:28:27 compute-0 nova_compute[254898]: 2025-11-29 05:28:27.003 254902 DEBUG nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:28:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:27 compute-0 nova_compute[254898]: 2025-11-29 05:28:27.983 254902 INFO nova.scheduler.client.report [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] [req-7da06b63-3af5-41bd-b235-19aadffc157d] Created resource provider record via placement API for resource provider with UUID 59594bc8-0143-475b-913f-cbe106b48966 and name compute-0.ctlplane.example.com.
Nov 29 05:28:28 compute-0 nova_compute[254898]: 2025-11-29 05:28:28.397 254902 DEBUG oslo_concurrency.processutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:28:28 compute-0 ceph-mon[75176]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:28:28 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2587030157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:28:28 compute-0 nova_compute[254898]: 2025-11-29 05:28:28.834 254902 DEBUG oslo_concurrency.processutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:28:28 compute-0 nova_compute[254898]: 2025-11-29 05:28:28.838 254902 DEBUG nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 29 05:28:28 compute-0 nova_compute[254898]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 29 05:28:28 compute-0 nova_compute[254898]: 2025-11-29 05:28:28.838 254902 INFO nova.virt.libvirt.host [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] kernel doesn't support AMD SEV
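The SEV decision is made by reading the kvm_amd module parameter, which on this host contains "N" (logged just above), hence "kernel doesn't support AMD SEV". A close sketch of that probe, assuming the usual sysfs path; nova's actual implementation differs in detail:

    # Sketch of the kernel-side SEV probe seen above.
    from pathlib import Path

    def kernel_supports_amd_sev(path='/sys/module/kvm_amd/parameters/sev'):
        p = Path(path)
        if not p.exists():            # parameter absent: no SEV either way
            return False
        return p.read_text().strip() in ('1', 'Y')

    print(kernel_supports_amd_sev())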
Nov 29 05:28:28 compute-0 nova_compute[254898]: 2025-11-29 05:28:28.839 254902 DEBUG nova.compute.provider_tree [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 05:28:28 compute-0 nova_compute[254898]: 2025-11-29 05:28:28.839 254902 DEBUG nova.virt.libvirt.driver [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 05:28:28 compute-0 nova_compute[254898]: 2025-11-29 05:28:28.942 254902 DEBUG nova.scheduler.client.report [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Updated inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 29 05:28:28 compute-0 nova_compute[254898]: 2025-11-29 05:28:28.943 254902 DEBUG nova.compute.provider_tree [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Updating resource provider 59594bc8-0143-475b-913f-cbe106b48966 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 05:28:28 compute-0 nova_compute[254898]: 2025-11-29 05:28:28.943 254902 DEBUG nova.compute.provider_tree [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 05:28:29 compute-0 nova_compute[254898]: 2025-11-29 05:28:29.094 254902 DEBUG nova.compute.provider_tree [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Updating resource provider 59594bc8-0143-475b-913f-cbe106b48966 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 29 05:28:29 compute-0 nova_compute[254898]: 2025-11-29 05:28:29.132 254902 DEBUG nova.compute.resource_tracker [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:28:29 compute-0 nova_compute[254898]: 2025-11-29 05:28:29.133 254902 DEBUG oslo_concurrency.lockutils [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:28:29 compute-0 nova_compute[254898]: 2025-11-29 05:28:29.133 254902 DEBUG nova.service [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 29 05:28:29 compute-0 nova_compute[254898]: 2025-11-29 05:28:29.249 254902 DEBUG nova.service [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 29 05:28:29 compute-0 nova_compute[254898]: 2025-11-29 05:28:29.250 254902 DEBUG nova.servicegroup.drivers.db [None req-84a22ce2-f35d-494d-be98-8c4ff10edeca - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 29 05:28:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:29 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2587030157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:28:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:30 compute-0 ceph-mon[75176]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:32 compute-0 ceph-mon[75176]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:34 compute-0 ceph-mon[75176]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:34 compute-0 sshd-session[255267]: Invalid user admin from 120.48.175.69 port 58298
Nov 29 05:28:35 compute-0 sshd-session[255267]: Received disconnect from 120.48.175.69 port 58298:11: Bye Bye [preauth]
Nov 29 05:28:35 compute-0 sshd-session[255267]: Disconnected from invalid user admin 120.48.175.69 port 58298 [preauth]
Nov 29 05:28:35 compute-0 nova_compute[254898]: 2025-11-29 05:28:35.251 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:28:35 compute-0 nova_compute[254898]: 2025-11-29 05:28:35.278 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:28:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:36 compute-0 podman[255269]: 2025-11-29 05:28:36.069596073 +0000 UTC m=+0.114539000 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 05:28:36 compute-0 ceph-mon[75176]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:38 compute-0 ceph-mon[75176]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:40 compute-0 ceph-mon[75176]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:41 compute-0 podman[255291]: 2025-11-29 05:28:41.027609732 +0000 UTC m=+0.077159436 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:28:41
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:28:41 compute-0 sshd-session[255289]: Invalid user khan from 45.120.216.232 port 43204
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:28:41 compute-0 sshd-session[255289]: Received disconnect from 45.120.216.232 port 43204:11: Bye Bye [preauth]
Nov 29 05:28:41 compute-0 sshd-session[255289]: Disconnected from invalid user khan 45.120.216.232 port 43204 [preauth]
Nov 29 05:28:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:42 compute-0 ceph-mon[75176]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:44 compute-0 ceph-mon[75176]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:46 compute-0 ceph-mon[75176]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:48 compute-0 ceph-mon[75176]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:49 compute-0 sudo[255318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:28:49 compute-0 sudo[255318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:49 compute-0 sudo[255318]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:49 compute-0 sudo[255343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:28:49 compute-0 sudo[255343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:49 compute-0 sudo[255343]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:49 compute-0 sudo[255368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:28:49 compute-0 sudo[255368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:49 compute-0 sudo[255368]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:49 compute-0 sudo[255393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:28:49 compute-0 sudo[255393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:49 compute-0 podman[255417]: 2025-11-29 05:28:49.465204838 +0000 UTC m=+0.064405108 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 05:28:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:49 compute-0 sudo[255393]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:28:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:28:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:28:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:28:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:28:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:28:49 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 09b80710-6c2b-43d5-a118-a3a9df355a4a does not exist
Nov 29 05:28:49 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 2407b23f-abc0-4daf-a606-d322671ac326 does not exist
Nov 29 05:28:49 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev cb4e024d-82dd-47d5-bcc3-6b5fe1660fb8 does not exist
Nov 29 05:28:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:28:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:28:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:28:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:28:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:28:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:28:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:28:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:28:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:28:50 compute-0 sudo[255468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:28:50 compute-0 sudo[255468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:50 compute-0 sudo[255468]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:50 compute-0 sudo[255493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:28:50 compute-0 sudo[255493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:50 compute-0 sudo[255493]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:50 compute-0 sudo[255518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:28:50 compute-0 sudo[255518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:50 compute-0 sudo[255518]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:50 compute-0 sudo[255543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:28:50 compute-0 sudo[255543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:50 compute-0 podman[255610]: 2025-11-29 05:28:50.540158215 +0000 UTC m=+0.063602860 container create 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 05:28:50 compute-0 systemd[1]: Started libpod-conmon-959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696.scope.
Nov 29 05:28:50 compute-0 podman[255610]: 2025-11-29 05:28:50.507198508 +0000 UTC m=+0.030643213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:28:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:28:50 compute-0 podman[255610]: 2025-11-29 05:28:50.633466096 +0000 UTC m=+0.156910731 container init 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 29 05:28:50 compute-0 podman[255610]: 2025-11-29 05:28:50.643214182 +0000 UTC m=+0.166658787 container start 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 05:28:50 compute-0 podman[255610]: 2025-11-29 05:28:50.646435917 +0000 UTC m=+0.169880632 container attach 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:28:50 compute-0 amazing_galois[255627]: 167 167
Nov 29 05:28:50 compute-0 systemd[1]: libpod-959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696.scope: Deactivated successfully.
Nov 29 05:28:50 compute-0 podman[255610]: 2025-11-29 05:28:50.651116846 +0000 UTC m=+0.174561501 container died 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:28:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4582dbd73a6820c9abe75fdd30115dded2835d37d959a629f85c2e6399618c78-merged.mount: Deactivated successfully.
Nov 29 05:28:50 compute-0 podman[255610]: 2025-11-29 05:28:50.689816076 +0000 UTC m=+0.213260691 container remove 959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:28:50 compute-0 systemd[1]: libpod-conmon-959e62e44c65584ade093498806de9d12ed835e12c2dd6c6c0cb2e50e6879696.scope: Deactivated successfully.
Nov 29 05:28:50 compute-0 podman[255649]: 2025-11-29 05:28:50.885004056 +0000 UTC m=+0.053342262 container create 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:28:50 compute-0 systemd[1]: Started libpod-conmon-139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb.scope.
Nov 29 05:28:50 compute-0 ceph-mon[75176]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:28:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:28:50 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:28:50 compute-0 podman[255649]: 2025-11-29 05:28:50.863612259 +0000 UTC m=+0.031950485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:28:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:50 compute-0 podman[255649]: 2025-11-29 05:28:50.988747029 +0000 UTC m=+0.157085305 container init 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:28:51 compute-0 podman[255649]: 2025-11-29 05:28:51.000587475 +0000 UTC m=+0.168925691 container start 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:28:51 compute-0 podman[255649]: 2025-11-29 05:28:51.003902212 +0000 UTC m=+0.172240528 container attach 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:28:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:28:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1146402093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:28:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1146402093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:28:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:28:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/750604550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:28:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/750604550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 sshd-session[255609]: Invalid user proxyuser from 120.48.175.69 port 34008
Nov 29 05:28:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:28:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/478911818' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:28:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/478911818' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 sshd-session[255609]: Received disconnect from 120.48.175.69 port 34008:11: Bye Bye [preauth]
Nov 29 05:28:51 compute-0 sshd-session[255609]: Disconnected from invalid user proxyuser 120.48.175.69 port 34008 [preauth]
Nov 29 05:28:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1146402093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1146402093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/750604550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/750604550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/478911818' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:28:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/478911818' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:28:52 compute-0 pedantic_noether[255665]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:28:52 compute-0 pedantic_noether[255665]: --> relative data size: 1.0
Nov 29 05:28:52 compute-0 pedantic_noether[255665]: --> All data devices are unavailable
Nov 29 05:28:52 compute-0 systemd[1]: libpod-139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb.scope: Deactivated successfully.
Nov 29 05:28:52 compute-0 podman[255649]: 2025-11-29 05:28:52.173898066 +0000 UTC m=+1.342236272 container died 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:28:52 compute-0 systemd[1]: libpod-139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb.scope: Consumed 1.074s CPU time.
Nov 29 05:28:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-02ef76b31fabf02d249193f847866752aaf2842f521e7f2496e534f48bb2786b-merged.mount: Deactivated successfully.
Nov 29 05:28:52 compute-0 podman[255649]: 2025-11-29 05:28:52.227386081 +0000 UTC m=+1.395724287 container remove 139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:28:52 compute-0 systemd[1]: libpod-conmon-139dbfd07ed2d0af543978ebe0c7b88862fc02f710129a76c0594698755d9ddb.scope: Deactivated successfully.
Nov 29 05:28:52 compute-0 sudo[255543]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:52 compute-0 sudo[255704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:28:52 compute-0 sudo[255704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:52 compute-0 sudo[255704]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:52 compute-0 sudo[255729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:28:52 compute-0 sudo[255729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:52 compute-0 sudo[255729]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:52 compute-0 sudo[255754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:28:52 compute-0 sudo[255754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:52 compute-0 sudo[255754]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:52 compute-0 sudo[255779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:28:52 compute-0 sudo[255779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:52 compute-0 podman[255845]: 2025-11-29 05:28:52.902640107 +0000 UTC m=+0.062620107 container create ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:28:52 compute-0 systemd[1]: Started libpod-conmon-ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3.scope.
Nov 29 05:28:52 compute-0 podman[255845]: 2025-11-29 05:28:52.873785107 +0000 UTC m=+0.033765197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:28:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:28:53 compute-0 podman[255845]: 2025-11-29 05:28:53.001006676 +0000 UTC m=+0.160986726 container init ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:28:53 compute-0 ceph-mon[75176]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:53 compute-0 podman[255845]: 2025-11-29 05:28:53.013331372 +0000 UTC m=+0.173311382 container start ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:28:53 compute-0 podman[255845]: 2025-11-29 05:28:53.017143491 +0000 UTC m=+0.177123491 container attach ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:28:53 compute-0 strange_chaum[255861]: 167 167
Nov 29 05:28:53 compute-0 systemd[1]: libpod-ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3.scope: Deactivated successfully.
Nov 29 05:28:53 compute-0 podman[255845]: 2025-11-29 05:28:53.022403473 +0000 UTC m=+0.182383493 container died ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 05:28:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c7cae146652f18d06c6f3c3b54bbbdacc3f83dc100a5cc628974c419a6efcee-merged.mount: Deactivated successfully.
Nov 29 05:28:53 compute-0 podman[255845]: 2025-11-29 05:28:53.062979537 +0000 UTC m=+0.222959537 container remove ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:28:53 compute-0 systemd[1]: libpod-conmon-ed48ed705146d3e58a8bd93d2d58e090a025a8b0c96fe2fd7320d8c9239634f3.scope: Deactivated successfully.
Nov 29 05:28:53 compute-0 podman[255885]: 2025-11-29 05:28:53.227622546 +0000 UTC m=+0.045518029 container create a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:28:53 compute-0 systemd[1]: Started libpod-conmon-a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586.scope.
Nov 29 05:28:53 compute-0 podman[255885]: 2025-11-29 05:28:53.2109746 +0000 UTC m=+0.028870103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:28:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626405528cdd341b79845f7cc7e0b54bc25d5b678155c04f7119cda8701ea909/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626405528cdd341b79845f7cc7e0b54bc25d5b678155c04f7119cda8701ea909/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626405528cdd341b79845f7cc7e0b54bc25d5b678155c04f7119cda8701ea909/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626405528cdd341b79845f7cc7e0b54bc25d5b678155c04f7119cda8701ea909/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
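[editor's note] The xfs messages above are informational: the overlay filesystem was formatted without the XFS bigtime feature, so its inode timestamps top out at 2038-01-19 (0x7fffffff seconds since the epoch). A hedged sketch, assuming xfsprogs is installed, of checking whether a given mount has bigtime enabled; the mount point is illustrative, not taken from this host:

    #!/usr/bin/env python3
    # Hedged sketch: report whether an XFS filesystem was formatted with
    # bigtime (timestamps past 2038). Assumes xfs_info from xfsprogs is
    # available; the mount point below is illustrative.
    import subprocess

    def xfs_has_bigtime(mountpoint: str) -> bool:
        info = subprocess.run(["xfs_info", mountpoint], capture_output=True,
                              text=True, check=True).stdout
        return "bigtime=1" in info

    if __name__ == "__main__":
        print(xfs_has_bigtime("/var/lib/containers"))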
Nov 29 05:28:53 compute-0 podman[255885]: 2025-11-29 05:28:53.342335064 +0000 UTC m=+0.160230577 container init a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:28:53 compute-0 podman[255885]: 2025-11-29 05:28:53.356793231 +0000 UTC m=+0.174688744 container start a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:28:53 compute-0 podman[255885]: 2025-11-29 05:28:53.360583759 +0000 UTC m=+0.178479272 container attach a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:28:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:54 compute-0 epic_diffie[255902]: {
Nov 29 05:28:54 compute-0 epic_diffie[255902]:     "0": [
Nov 29 05:28:54 compute-0 epic_diffie[255902]:         {
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "devices": [
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "/dev/loop3"
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             ],
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_name": "ceph_lv0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_size": "21470642176",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "name": "ceph_lv0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "tags": {
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.cluster_name": "ceph",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.crush_device_class": "",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.encrypted": "0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.osd_id": "0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.type": "block",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.vdo": "0"
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             },
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "type": "block",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "vg_name": "ceph_vg0"
Nov 29 05:28:54 compute-0 epic_diffie[255902]:         }
Nov 29 05:28:54 compute-0 epic_diffie[255902]:     ],
Nov 29 05:28:54 compute-0 epic_diffie[255902]:     "1": [
Nov 29 05:28:54 compute-0 epic_diffie[255902]:         {
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "devices": [
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "/dev/loop4"
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             ],
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_name": "ceph_lv1",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_size": "21470642176",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "name": "ceph_lv1",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "tags": {
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.cluster_name": "ceph",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.crush_device_class": "",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.encrypted": "0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.osd_id": "1",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.type": "block",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.vdo": "0"
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             },
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "type": "block",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "vg_name": "ceph_vg1"
Nov 29 05:28:54 compute-0 epic_diffie[255902]:         }
Nov 29 05:28:54 compute-0 epic_diffie[255902]:     ],
Nov 29 05:28:54 compute-0 epic_diffie[255902]:     "2": [
Nov 29 05:28:54 compute-0 epic_diffie[255902]:         {
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "devices": [
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "/dev/loop5"
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             ],
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_name": "ceph_lv2",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_size": "21470642176",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "name": "ceph_lv2",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "tags": {
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.cluster_name": "ceph",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.crush_device_class": "",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.encrypted": "0",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.osd_id": "2",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.type": "block",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:                 "ceph.vdo": "0"
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             },
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "type": "block",
Nov 29 05:28:54 compute-0 epic_diffie[255902]:             "vg_name": "ceph_vg2"
Nov 29 05:28:54 compute-0 epic_diffie[255902]:         }
Nov 29 05:28:54 compute-0 epic_diffie[255902]:     ]
Nov 29 05:28:54 compute-0 epic_diffie[255902]: }
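[editor's note] The JSON printed by the epic_diffie container is consistent with `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing that OSD, with the ceph.* LV tags present both as a raw string (lv_tags) and in parsed form (tags). A minimal parsing sketch, assuming the container stdout above has been saved to lvm_list.json:

    #!/usr/bin/env python3
    # Minimal sketch: summarize the ceph-volume lvm list JSON logged above.
    # Assumes the container's stdout was captured to lvm_list.json; key
    # names are taken verbatim from the log.
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")

For osd 0 above this prints /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3.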
Nov 29 05:28:54 compute-0 systemd[1]: libpod-a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586.scope: Deactivated successfully.
Nov 29 05:28:54 compute-0 podman[255885]: 2025-11-29 05:28:54.241845727 +0000 UTC m=+1.059741240 container died a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:28:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-626405528cdd341b79845f7cc7e0b54bc25d5b678155c04f7119cda8701ea909-merged.mount: Deactivated successfully.
Nov 29 05:28:54 compute-0 podman[255885]: 2025-11-29 05:28:54.320728672 +0000 UTC m=+1.138624185 container remove a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:28:54 compute-0 systemd[1]: libpod-conmon-a0cb4089c5d175bfe75f059edbd9582d6445dbd49f8de3b90fe2c4d732f31586.scope: Deactivated successfully.
Nov 29 05:28:54 compute-0 sudo[255779]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:28:54 compute-0 sudo[255925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:28:54 compute-0 sudo[255925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:54 compute-0 sudo[255925]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:54 compute-0 sudo[255950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:28:54 compute-0 sudo[255950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:54 compute-0 sudo[255950]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:55 compute-0 ceph-mon[75176]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
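[editor's note] The recurring pgmap lines are the mgr's periodic cluster digest: placement-group counts by state plus usage totals (here a quiet cluster, all 305 PGs active+clean with 60 GiB free). A hedged sketch of extracting the fields; the regex mirrors the exact line format in this journal and may not match other Ceph releases:

    #!/usr/bin/env python3
    # Hedged sketch: parse the pgmap status lines seen in this journal.
    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v717: 305 pgs: 305 active+clean; "
            "456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail")
    print(PGMAP.search(line).groupdict())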
Nov 29 05:28:55 compute-0 sudo[255975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:28:55 compute-0 sudo[255975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:55 compute-0 sudo[255975]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:55 compute-0 sudo[256000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:28:55 compute-0 sudo[256000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:55 compute-0 podman[256066]: 2025-11-29 05:28:55.478406009 +0000 UTC m=+0.050484585 container create c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:28:55 compute-0 systemd[1]: Started libpod-conmon-c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b.scope.
Nov 29 05:28:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:28:55 compute-0 podman[256066]: 2025-11-29 05:28:55.463914442 +0000 UTC m=+0.035993038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:28:55 compute-0 podman[256066]: 2025-11-29 05:28:55.576429049 +0000 UTC m=+0.148507705 container init c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:28:55 compute-0 podman[256066]: 2025-11-29 05:28:55.583976174 +0000 UTC m=+0.156054750 container start c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:28:55 compute-0 podman[256066]: 2025-11-29 05:28:55.587408995 +0000 UTC m=+0.159487671 container attach c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:28:55 compute-0 gifted_moore[256082]: 167 167
Nov 29 05:28:55 compute-0 systemd[1]: libpod-c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b.scope: Deactivated successfully.
Nov 29 05:28:55 compute-0 podman[256066]: 2025-11-29 05:28:55.589595185 +0000 UTC m=+0.161673771 container died c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:28:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-191f823807153d5116ea355375f2753b49a21b645644ad729db56fa9a78f5716-merged.mount: Deactivated successfully.
Nov 29 05:28:55 compute-0 podman[256066]: 2025-11-29 05:28:55.664373625 +0000 UTC m=+0.236452211 container remove c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_moore, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:28:55 compute-0 systemd[1]: libpod-conmon-c61669d597e21a763273c26f9eb3160be06e039f7e0d278c1e1cbd6afc40a25b.scope: Deactivated successfully.
Nov 29 05:28:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:55 compute-0 podman[256106]: 2025-11-29 05:28:55.861652754 +0000 UTC m=+0.055351649 container create f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:28:55 compute-0 systemd[1]: Started libpod-conmon-f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc.scope.
Nov 29 05:28:55 compute-0 podman[256106]: 2025-11-29 05:28:55.832753661 +0000 UTC m=+0.026452606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:28:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273042711b15092f8b8914b6e02797e976ba3deb8e8049c50f1082fa04708d4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273042711b15092f8b8914b6e02797e976ba3deb8e8049c50f1082fa04708d4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273042711b15092f8b8914b6e02797e976ba3deb8e8049c50f1082fa04708d4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273042711b15092f8b8914b6e02797e976ba3deb8e8049c50f1082fa04708d4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:28:55 compute-0 podman[256106]: 2025-11-29 05:28:55.956897959 +0000 UTC m=+0.150596904 container init f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:28:55 compute-0 podman[256106]: 2025-11-29 05:28:55.967394793 +0000 UTC m=+0.161093678 container start f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:28:55 compute-0 podman[256106]: 2025-11-29 05:28:55.971668832 +0000 UTC m=+0.165367817 container attach f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:28:56 compute-0 gifted_brown[256122]: {
Nov 29 05:28:56 compute-0 gifted_brown[256122]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "osd_id": 0,
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "type": "bluestore"
Nov 29 05:28:56 compute-0 gifted_brown[256122]:     },
Nov 29 05:28:56 compute-0 gifted_brown[256122]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "osd_id": 1,
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "type": "bluestore"
Nov 29 05:28:56 compute-0 gifted_brown[256122]:     },
Nov 29 05:28:56 compute-0 gifted_brown[256122]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "osd_id": 2,
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:28:56 compute-0 gifted_brown[256122]:         "type": "bluestore"
Nov 29 05:28:56 compute-0 gifted_brown[256122]:     }
Nov 29 05:28:56 compute-0 gifted_brown[256122]: }
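[editor's note] This second blob matches the `ceph-volume ... raw list --format json` command visible in the sudo line at 05:28:55: it is keyed by OSD UUID rather than OSD id and reports the device-mapper path and store type. A short sketch that re-indexes it by osd_id, assuming the output was saved to raw_list.json:

    #!/usr/bin/env python3
    # Sketch: index the ceph-volume raw list JSON logged above by osd_id.
    # Assumes stdout was captured to raw_list.json.
    import json

    with open("raw_list.json") as f:
        raw = json.load(f)

    by_id = {entry["osd_id"]: entry for entry in raw.values()}
    for osd_id in sorted(by_id):
        e = by_id[osd_id]
        print(f"osd.{osd_id} ({e['type']}): {e['device']} uuid={e['osd_uuid']}")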
Nov 29 05:28:56 compute-0 systemd[1]: libpod-f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc.scope: Deactivated successfully.
Nov 29 05:28:56 compute-0 systemd[1]: libpod-f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc.scope: Consumed 1.033s CPU time.
Nov 29 05:28:56 compute-0 podman[256106]: 2025-11-29 05:28:56.988213017 +0000 UTC m=+1.181911912 container died f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:28:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-273042711b15092f8b8914b6e02797e976ba3deb8e8049c50f1082fa04708d4e-merged.mount: Deactivated successfully.
Nov 29 05:28:57 compute-0 ceph-mon[75176]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:57 compute-0 podman[256106]: 2025-11-29 05:28:57.057338505 +0000 UTC m=+1.251037360 container remove f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 05:28:57 compute-0 systemd[1]: libpod-conmon-f7b5040c1c6b27c73a102c6a32a85c0cd2a052e5295f0270dcb67330d8b562dc.scope: Deactivated successfully.
Nov 29 05:28:57 compute-0 sudo[256000]: pam_unix(sudo:session): session closed for user root
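[editor's note] The sudo COMMAND line above shows the pattern cephadm uses for these probes: a self-contained cephadm script shipped under /var/lib/ceph/<fsid>/ is run as root with --image and --timeout, and everything after `--` is handed to ceph-volume inside a one-shot container (the podman create/start/died/remove sequences in this log). A hedged sketch reproducing that call; every value is copied from the journal, and the digest suffix on the cephadm filename is specific to this deployment, not a stable interface:

    #!/usr/bin/env python3
    # Hedged sketch of the cephadm ceph-volume invocation logged above.
    # Mirrors the observed command line only; values copied from the journal.
    import json, subprocess

    FSID = "93f82912-647c-5e78-b081-707d0a2966d8"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    cmd = ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE,
           "--timeout", "895", "ceph-volume", "--fsid", FSID,
           "--", "raw", "list", "--format", "json"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(out), indent=4))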
Nov 29 05:28:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:28:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:28:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:28:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:28:57 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ee574066-3849-4fcf-9706-b8a8c143da4f does not exist
Nov 29 05:28:57 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ac575427-fb0a-4d5a-baab-9defb75c9b88 does not exist
Nov 29 05:28:57 compute-0 sudo[256168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:28:57 compute-0 sudo[256168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:57 compute-0 sudo[256168]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:57 compute-0 sudo[256193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:28:57 compute-0 sudo[256193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:28:57 compute-0 sudo[256193]: pam_unix(sudo:session): session closed for user root
Nov 29 05:28:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:28:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:28:58 compute-0 ceph-mon[75176]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:59 compute-0 sshd-session[256218]: Invalid user builder from 152.32.145.111 port 51654
Nov 29 05:28:59 compute-0 sshd-session[256218]: Received disconnect from 152.32.145.111 port 51654:11: Bye Bye [preauth]
Nov 29 05:28:59 compute-0 sshd-session[256218]: Disconnected from invalid user builder 152.32.145.111 port 51654 [preauth]
Nov 29 05:28:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:28:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:00 compute-0 ceph-mon[75176]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:02 compute-0 ceph-mon[75176]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:04 compute-0 ceph-mon[75176]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:06 compute-0 ceph-mon[75176]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:07 compute-0 podman[256223]: 2025-11-29 05:29:07.0412755 +0000 UTC m=+0.090150917 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
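[editor's note] The health_status events for multipathd and the OVN containers below are podman's scheduled container healthchecks; per the embedded config_data, each EDPM container bind-mounts /var/lib/openstack/healthchecks/<name> into /openstack and runs /openstack/healthcheck inside the container. The same check can be triggered on demand; a hedged sketch mapping the exit status to the health_status field seen here:

    #!/usr/bin/env python3
    # Hedged sketch: run a container healthcheck on demand with podman and
    # translate the exit status into the health_status value logged above.
    import subprocess

    def health(container: str) -> str:
        rc = subprocess.run(["podman", "healthcheck", "run", container],
                            capture_output=True).returncode
        return "healthy" if rc == 0 else "unhealthy"

    print(health("multipathd"))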
Nov 29 05:29:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:08 compute-0 ceph-mon[75176]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:09 compute-0 sshd-session[256220]: Connection closed by 101.47.141.125 port 44984 [preauth]
Nov 29 05:29:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:10 compute-0 sshd-session[256221]: Invalid user astra from 120.48.175.69 port 37958
Nov 29 05:29:10 compute-0 sshd-session[256221]: Received disconnect from 120.48.175.69 port 37958:11: Bye Bye [preauth]
Nov 29 05:29:10 compute-0 sshd-session[256221]: Disconnected from invalid user astra 120.48.175.69 port 37958 [preauth]
Nov 29 05:29:10 compute-0 ceph-mon[75176]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:29:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:29:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:29:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:29:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:29:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:29:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:12 compute-0 podman[256245]: 2025-11-29 05:29:12.074197776 +0000 UTC m=+0.128056069 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:29:12 compute-0 ceph-mon[75176]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:29:13.740 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:29:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:29:13.740 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:29:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:29:13.741 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
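[editor's note] The acquire/release triplet above is oslo.concurrency's standard lock instrumentation: a method wrapped with lockutils.synchronized logs the lock name, wait time, and hold time at DEBUG, which is exactly the "Acquiring lock / Lock acquired / Lock released" shape seen here. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the function is illustrative, not neutron's actual code:

    #!/usr/bin/env python3
    # Minimal sketch of the oslo.concurrency pattern behind the lock lines
    # above. Requires oslo.concurrency; the lock name is illustrative.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Critical section: at most one thread of this process at a time.
        pass

    check_child_processes()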
Nov 29 05:29:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:14 compute-0 ceph-mon[75176]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:16 compute-0 ceph-mon[75176]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:18 compute-0 ceph-mon[75176]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:19 compute-0 podman[256272]: 2025-11-29 05:29:19.997129073 +0000 UTC m=+0.057425117 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 05:29:20 compute-0 ceph-mon[75176]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:22 compute-0 ceph-mon[75176]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.956 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.957 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.957 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.957 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:29:24 compute-0 ceph-mon[75176]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.991 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.991 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.992 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.992 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.993 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.993 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.993 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.994 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:29:24 compute-0 nova_compute[254898]: 2025-11-29 05:29:24.994 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
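[editor's note] The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic task runner iterating the compute manager's decorated methods; the last one launched here, update_available_resource, drives the resource-tracker activity that follows. A hedged sketch of how such tasks are declared, with illustrative names rather than nova's:

    #!/usr/bin/env python3
    # Hedged sketch of oslo.service periodic tasks, the machinery producing
    # the "Running periodic task ..." lines above. Requires oslo.service and
    # oslo.config; class and task names are illustrative.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_something(self, context):
            # Body runs each time the spacing interval elapses.
            print("periodic task ran")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)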
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.025 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.025 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.026 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.026 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.027 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:29:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:29:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141797165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.501 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
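[editor's note] With an RBD-backed ephemeral store, nova sizes its disk pool by shelling out to `ceph df --format=json` through oslo_concurrency.processutils, as the two lines above show; the free_disk figure reported below is derived from that JSON. A hedged sketch of the same call and the max_avail arithmetic; the pool name is an assumption, not visible in this log:

    #!/usr/bin/env python3
    # Hedged sketch of the `ceph df --format=json` call logged above and the
    # free-space math derived from it. The pool name "vms" is an assumption.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    df = json.loads(out)

    for pool in df["pools"]:
        if pool["name"] == "vms":  # assumed pool name
            free_gb = pool["stats"]["max_avail"] / 1024 ** 3
            print(f"free_disk ~= {free_gb} GB")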
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.670 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.671 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.671 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.672 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.749 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.749 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:29:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:25 compute-0 nova_compute[254898]: 2025-11-29 05:29:25.783 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:29:25 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/141797165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:29:26 compute-0 sshd-session[256291]: Received disconnect from 120.48.175.69 port 41940:11: Bye Bye [preauth]
Nov 29 05:29:26 compute-0 sshd-session[256291]: Disconnected from authenticating user root 120.48.175.69 port 41940 [preauth]
Nov 29 05:29:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:29:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142771401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:29:26 compute-0 nova_compute[254898]: 2025-11-29 05:29:26.177 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:29:26 compute-0 nova_compute[254898]: 2025-11-29 05:29:26.185 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:29:26 compute-0 nova_compute[254898]: 2025-11-29 05:29:26.211 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:29:26 compute-0 nova_compute[254898]: 2025-11-29 05:29:26.214 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:29:26 compute-0 nova_compute[254898]: 2025-11-29 05:29:26.214 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:29:26 compute-0 ceph-mon[75176]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2142771401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:29:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:28 compute-0 ceph-mon[75176]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:31 compute-0 ceph-mon[75176]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:33 compute-0 ceph-mon[75176]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 29 05:29:33 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2494357982' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 05:29:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 05:29:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 05:29:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 05:29:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:34 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/2494357982' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 05:29:34 compute-0 ceph-mon[75176]: from='client.14349 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 05:29:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:35 compute-0 ceph-mon[75176]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:37 compute-0 ceph-mon[75176]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:29:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5780 writes, 24K keys, 5780 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5780 writes, 976 syncs, 5.92 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:29:38 compute-0 podman[256337]: 2025-11-29 05:29:38.003325895 +0000 UTC m=+0.054371016 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 05:29:39 compute-0 ceph-mon[75176]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:41 compute-0 ceph-mon[75176]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:29:41
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.meta', 'vms', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images']
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:29:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:42 compute-0 ceph-mon[75176]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:29:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7055 writes, 29K keys, 7055 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7055 writes, 1300 syncs, 5.43 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 278 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:29:43 compute-0 podman[256359]: 2025-11-29 05:29:43.046615782 +0000 UTC m=+0.098721228 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:29:43 compute-0 sshd-session[256357]: Invalid user minecraft from 120.48.175.69 port 45818
Nov 29 05:29:43 compute-0 sshd-session[256357]: Received disconnect from 120.48.175.69 port 45818:11: Bye Bye [preauth]
Nov 29 05:29:43 compute-0 sshd-session[256357]: Disconnected from invalid user minecraft 120.48.175.69 port 45818 [preauth]
Nov 29 05:29:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:44 compute-0 ceph-mon[75176]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:46 compute-0 ceph-mon[75176]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:29:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5631 writes, 23K keys, 5631 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5631 writes, 860 syncs, 6.55 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:29:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 29 05:29:48 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 05:29:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 05:29:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 05:29:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 29 05:29:48 compute-0 ceph-mon[75176]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:48 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 29 05:29:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:49 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 29 05:29:50 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Check health
Nov 29 05:29:50 compute-0 ceph-mon[75176]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:51 compute-0 podman[256385]: 2025-11-29 05:29:51.041653137 +0000 UTC m=+0.085175482 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:29:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:52 compute-0 ceph-mon[75176]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:53 compute-0 sshd-session[256404]: Invalid user kiosk from 45.120.216.232 port 42094
Nov 29 05:29:53 compute-0 sshd-session[256404]: Received disconnect from 45.120.216.232 port 42094:11: Bye Bye [preauth]
Nov 29 05:29:53 compute-0 sshd-session[256404]: Disconnected from invalid user kiosk 45.120.216.232 port 42094 [preauth]
Nov 29 05:29:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:29:54 compute-0 ceph-mon[75176]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:56 compute-0 ceph-mon[75176]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:57 compute-0 sudo[256406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:29:57 compute-0 sudo[256406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:57 compute-0 sudo[256406]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:57 compute-0 sudo[256431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:29:57 compute-0 sudo[256431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:57 compute-0 sudo[256431]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:57 compute-0 sudo[256456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:29:57 compute-0 sudo[256456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:57 compute-0 sudo[256456]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:57 compute-0 sudo[256481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 05:29:57 compute-0 sudo[256481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:57 compute-0 sudo[256481]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:29:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:29:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:29:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:29:57 compute-0 sudo[256527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:29:57 compute-0 sudo[256527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:57 compute-0 sudo[256527]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:57 compute-0 sudo[256552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:29:57 compute-0 sudo[256552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:57 compute-0 sudo[256552]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:57 compute-0 sudo[256577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:29:57 compute-0 sudo[256577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:57 compute-0 sudo[256577]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:58 compute-0 sudo[256602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:29:58 compute-0 sudo[256602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:58 compute-0 sudo[256602]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:29:58 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:29:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:29:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:29:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:29:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:29:58 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d73e39a9-1416-4d7a-94da-38014dc95c33 does not exist
Nov 29 05:29:58 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 382e6138-700e-4431-8fd1-18dd7f6bb828 does not exist
Nov 29 05:29:58 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 8929c39b-2d03-4501-82e1-b65e37189af8 does not exist
Nov 29 05:29:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:29:58 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:29:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:29:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:29:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:29:58 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:29:58 compute-0 sudo[256658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:29:58 compute-0 sudo[256658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:58 compute-0 sudo[256658]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:58 compute-0 sudo[256683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:29:58 compute-0 sudo[256683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:58 compute-0 sudo[256683]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:58 compute-0 ceph-mon[75176]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:29:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:29:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:29:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:29:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:29:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:29:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:29:58 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:29:58 compute-0 sudo[256708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:29:58 compute-0 sudo[256708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:58 compute-0 sudo[256708]: pam_unix(sudo:session): session closed for user root
Nov 29 05:29:58 compute-0 sudo[256733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:29:58 compute-0 sudo[256733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:29:59 compute-0 podman[256799]: 2025-11-29 05:29:59.301667365 +0000 UTC m=+0.064211235 container create 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:29:59 compute-0 systemd[1]: Started libpod-conmon-2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a.scope.
Nov 29 05:29:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:29:59 compute-0 podman[256799]: 2025-11-29 05:29:59.27954463 +0000 UTC m=+0.042088600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:29:59 compute-0 podman[256799]: 2025-11-29 05:29:59.380453977 +0000 UTC m=+0.142997897 container init 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:29:59 compute-0 podman[256799]: 2025-11-29 05:29:59.38570856 +0000 UTC m=+0.148252450 container start 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 05:29:59 compute-0 podman[256799]: 2025-11-29 05:29:59.389445806 +0000 UTC m=+0.151989696 container attach 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:29:59 compute-0 eager_curie[256815]: 167 167
Nov 29 05:29:59 compute-0 systemd[1]: libpod-2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a.scope: Deactivated successfully.
Nov 29 05:29:59 compute-0 podman[256799]: 2025-11-29 05:29:59.391700279 +0000 UTC m=+0.154244169 container died 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:29:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b213f9ac7cb69d39c00675ad2dcd5f0bc3acb5ad07e106cc295d53b78313302-merged.mount: Deactivated successfully.
Nov 29 05:29:59 compute-0 podman[256799]: 2025-11-29 05:29:59.43178209 +0000 UTC m=+0.194325990 container remove 2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:29:59 compute-0 systemd[1]: libpod-conmon-2ad6d4a0053a3878907101a1af834a10a0306f797c23be03b014f190a732723a.scope: Deactivated successfully.
Nov 29 05:29:59 compute-0 podman[256839]: 2025-11-29 05:29:59.658801912 +0000 UTC m=+0.066405206 container create a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:29:59 compute-0 systemd[1]: Started libpod-conmon-a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c.scope.
Nov 29 05:29:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:29:59 compute-0 podman[256839]: 2025-11-29 05:29:59.636874021 +0000 UTC m=+0.044477305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:29:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:29:59 compute-0 podman[256839]: 2025-11-29 05:29:59.751399865 +0000 UTC m=+0.159003219 container init a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:29:59 compute-0 podman[256839]: 2025-11-29 05:29:59.764228964 +0000 UTC m=+0.171832278 container start a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:29:59 compute-0 podman[256839]: 2025-11-29 05:29:59.769213289 +0000 UTC m=+0.176816563 container attach a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:29:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:29:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:00 compute-0 ceph-mon[75176]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:00 compute-0 beautiful_lichterman[256857]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:30:00 compute-0 beautiful_lichterman[256857]: --> relative data size: 1.0
Nov 29 05:30:00 compute-0 beautiful_lichterman[256857]: --> All data devices are unavailable
Nov 29 05:30:00 compute-0 systemd[1]: libpod-a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c.scope: Deactivated successfully.
Nov 29 05:30:00 compute-0 podman[256839]: 2025-11-29 05:30:00.850772026 +0000 UTC m=+1.258375310 container died a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:30:00 compute-0 systemd[1]: libpod-a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c.scope: Consumed 1.034s CPU time.
Nov 29 05:30:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9ed8932adaac6a5785b8af0858987333cae69acf04cfc927f600c9c62e7e7d2-merged.mount: Deactivated successfully.
Nov 29 05:30:00 compute-0 podman[256839]: 2025-11-29 05:30:00.907504605 +0000 UTC m=+1.315107869 container remove a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:30:00 compute-0 systemd[1]: libpod-conmon-a1e4cae2399c1b889cb2b009af2ae68d48f3afe3bcf894649df281c1ca8c409c.scope: Deactivated successfully.
Nov 29 05:30:00 compute-0 sshd-session[256856]: Invalid user seafile from 120.48.175.69 port 49836
Nov 29 05:30:00 compute-0 sudo[256733]: pam_unix(sudo:session): session closed for user root
Nov 29 05:30:01 compute-0 sudo[256899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:30:01 compute-0 sudo[256899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:30:01 compute-0 sudo[256899]: pam_unix(sudo:session): session closed for user root
Nov 29 05:30:01 compute-0 sudo[256924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:30:01 compute-0 sudo[256924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:30:01 compute-0 sudo[256924]: pam_unix(sudo:session): session closed for user root
Nov 29 05:30:01 compute-0 sshd-session[256856]: Received disconnect from 120.48.175.69 port 49836:11: Bye Bye [preauth]
Nov 29 05:30:01 compute-0 sshd-session[256856]: Disconnected from invalid user seafile 120.48.175.69 port 49836 [preauth]
Nov 29 05:30:01 compute-0 sudo[256949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:30:01 compute-0 sudo[256949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:30:01 compute-0 sudo[256949]: pam_unix(sudo:session): session closed for user root
Nov 29 05:30:01 compute-0 sudo[256974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:30:01 compute-0 sudo[256974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
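[Editor's note: the sudo entry above is cephadm's host-side pattern for running ceph-volume inside the pinned ceph container image. A minimal Python sketch of that same invocation follows; every path, image digest, and flag is copied verbatim from the logged command line, and it only succeeds on a host with this exact cluster state, so treat it as illustrative rather than a recipe.]

    import subprocess

    # The checksum-suffixed cephadm copy under /var/lib/ceph/<fsid>/ runs
    # ceph-volume inside the pinned ceph image. All values below are taken
    # verbatim from the sudo COMMAND= line logged above.
    FSID = "93f82912-647c-5e78-b081-707d0a2966d8"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    cmd = ["sudo", "/bin/python3", CEPHADM,
           "--image", IMAGE, "--timeout", "895",
           "ceph-volume", "--fsid", FSID,
           "--", "lvm", "list", "--format", "json"]

    # stdout is the JSON that appears in the nifty_hawking container
    # output a few lines below.
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)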
Nov 29 05:30:01 compute-0 podman[257040]: 2025-11-29 05:30:01.694282496 +0000 UTC m=+0.052825279 container create 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:30:01 compute-0 systemd[1]: Started libpod-conmon-64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1.scope.
Nov 29 05:30:01 compute-0 podman[257040]: 2025-11-29 05:30:01.667709628 +0000 UTC m=+0.026252501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:30:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:30:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:01 compute-0 podman[257040]: 2025-11-29 05:30:01.783312577 +0000 UTC m=+0.141855450 container init 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:30:01 compute-0 podman[257040]: 2025-11-29 05:30:01.790703219 +0000 UTC m=+0.149246002 container start 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:30:01 compute-0 podman[257040]: 2025-11-29 05:30:01.794040517 +0000 UTC m=+0.152583320 container attach 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:30:01 compute-0 admiring_dirac[257057]: 167 167
Nov 29 05:30:01 compute-0 systemd[1]: libpod-64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1.scope: Deactivated successfully.
Nov 29 05:30:01 compute-0 podman[257040]: 2025-11-29 05:30:01.798382848 +0000 UTC m=+0.156925631 container died 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 05:30:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-227e6aacd3dd4d384ad6b1467b4d877d4f12c658863fc127b8e11bae803f2b8c-merged.mount: Deactivated successfully.
Nov 29 05:30:01 compute-0 podman[257040]: 2025-11-29 05:30:01.832669175 +0000 UTC m=+0.191211968 container remove 64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:30:01 compute-0 systemd[1]: libpod-conmon-64cfa1d55a850eeaab29f5bced4641cd01b0d6bf41095f3a4ca6d0f0388881a1.scope: Deactivated successfully.
Nov 29 05:30:02 compute-0 podman[257081]: 2025-11-29 05:30:02.034243263 +0000 UTC m=+0.059684818 container create 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 05:30:02 compute-0 systemd[1]: Started libpod-conmon-62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619.scope.
Nov 29 05:30:02 compute-0 podman[257081]: 2025-11-29 05:30:02.014996766 +0000 UTC m=+0.040438361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:30:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a484154edee5d095e848e59b2e48348eaf0f5b92881a9a907030a35450f851a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a484154edee5d095e848e59b2e48348eaf0f5b92881a9a907030a35450f851a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a484154edee5d095e848e59b2e48348eaf0f5b92881a9a907030a35450f851a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:30:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a484154edee5d095e848e59b2e48348eaf0f5b92881a9a907030a35450f851a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:30:02 compute-0 podman[257081]: 2025-11-29 05:30:02.135192911 +0000 UTC m=+0.160634466 container init 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:30:02 compute-0 podman[257081]: 2025-11-29 05:30:02.146563976 +0000 UTC m=+0.172005531 container start 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:30:02 compute-0 podman[257081]: 2025-11-29 05:30:02.150339324 +0000 UTC m=+0.175780919 container attach 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 05:30:02 compute-0 ceph-mon[75176]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:02 compute-0 nifty_hawking[257098]: {
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:     "0": [
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:         {
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "devices": [
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "/dev/loop3"
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             ],
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_name": "ceph_lv0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_size": "21470642176",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "name": "ceph_lv0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "tags": {
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.cluster_name": "ceph",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.crush_device_class": "",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.encrypted": "0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.osd_id": "0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.type": "block",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.vdo": "0"
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             },
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "type": "block",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "vg_name": "ceph_vg0"
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:         }
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:     ],
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:     "1": [
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:         {
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "devices": [
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "/dev/loop4"
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             ],
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_name": "ceph_lv1",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_size": "21470642176",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "name": "ceph_lv1",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "tags": {
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.cluster_name": "ceph",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.crush_device_class": "",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.encrypted": "0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.osd_id": "1",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.type": "block",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.vdo": "0"
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             },
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "type": "block",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "vg_name": "ceph_vg1"
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:         }
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:     ],
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:     "2": [
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:         {
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "devices": [
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "/dev/loop5"
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             ],
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_name": "ceph_lv2",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_size": "21470642176",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "name": "ceph_lv2",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "tags": {
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.cluster_name": "ceph",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.crush_device_class": "",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.encrypted": "0",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.osd_id": "2",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.type": "block",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:                 "ceph.vdo": "0"
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             },
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "type": "block",
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:             "vg_name": "ceph_vg2"
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:         }
Nov 29 05:30:02 compute-0 nifty_hawking[257098]:     ]
Nov 29 05:30:02 compute-0 nifty_hawking[257098]: }
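[Editor's note: the JSON above, emitted by the `lvm list --format json` run, shows all three logical volumes already tagged as OSDs 0-2 of this cluster fsid. That is consistent with the earlier `lvm batch` run reporting "--> All data devices are unavailable": the LVs are already prepared, so there is nothing left to create. A short sketch of consuming that output, assuming it was captured to a file (the name lvm_list.json is hypothetical):]

    import json

    # Parse the `ceph-volume lvm list --format json` output shown above.
    # Top-level keys are OSD ids ("0", "1", "2"); each maps to a list of
    # LV records whose ceph.* tags identify the cluster and OSD.
    with open("lvm_list.json") as f:  # hypothetical capture of the JSON above
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")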
Nov 29 05:30:02 compute-0 systemd[1]: libpod-62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619.scope: Deactivated successfully.
Nov 29 05:30:02 compute-0 conmon[257098]: conmon 62e856d2f885a439c598 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619.scope/container/memory.events
Nov 29 05:30:02 compute-0 podman[257081]: 2025-11-29 05:30:02.92275501 +0000 UTC m=+0.948196565 container died 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:30:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a484154edee5d095e848e59b2e48348eaf0f5b92881a9a907030a35450f851a-merged.mount: Deactivated successfully.
Nov 29 05:30:02 compute-0 podman[257081]: 2025-11-29 05:30:02.982116491 +0000 UTC m=+1.007558056 container remove 62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hawking, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:30:02 compute-0 systemd[1]: libpod-conmon-62e856d2f885a439c598d6d134ee59c9967d80d264031ec67d1d1a0beefda619.scope: Deactivated successfully.
Nov 29 05:30:03 compute-0 sudo[256974]: pam_unix(sudo:session): session closed for user root
Nov 29 05:30:03 compute-0 sudo[257117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:30:03 compute-0 sudo[257117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:30:03 compute-0 sudo[257117]: pam_unix(sudo:session): session closed for user root
Nov 29 05:30:03 compute-0 sudo[257142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:30:03 compute-0 sudo[257142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:30:03 compute-0 sudo[257142]: pam_unix(sudo:session): session closed for user root
Nov 29 05:30:03 compute-0 sudo[257167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:30:03 compute-0 sudo[257167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:30:03 compute-0 sudo[257167]: pam_unix(sudo:session): session closed for user root
Nov 29 05:30:03 compute-0 sudo[257192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:30:03 compute-0 sudo[257192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:30:03 compute-0 podman[257258]: 2025-11-29 05:30:03.627661826 +0000 UTC m=+0.039984300 container create 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:30:03 compute-0 systemd[1]: Started libpod-conmon-661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f.scope.
Nov 29 05:30:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:30:03 compute-0 podman[257258]: 2025-11-29 05:30:03.699507438 +0000 UTC m=+0.111829932 container init 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:30:03 compute-0 podman[257258]: 2025-11-29 05:30:03.61145893 +0000 UTC m=+0.023781424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:30:03 compute-0 podman[257258]: 2025-11-29 05:30:03.705858365 +0000 UTC m=+0.118180839 container start 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:30:03 compute-0 podman[257258]: 2025-11-29 05:30:03.708941637 +0000 UTC m=+0.121264111 container attach 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:30:03 compute-0 silly_euler[257274]: 167 167
Nov 29 05:30:03 compute-0 systemd[1]: libpod-661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f.scope: Deactivated successfully.
Nov 29 05:30:03 compute-0 podman[257258]: 2025-11-29 05:30:03.711188919 +0000 UTC m=+0.123511393 container died 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:30:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4c54abd23e245d0e2a6002b36b6ec6105f3c9b6e3a16f86c762ba8228902301-merged.mount: Deactivated successfully.
Nov 29 05:30:03 compute-0 podman[257258]: 2025-11-29 05:30:03.740760387 +0000 UTC m=+0.153082861 container remove 661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:30:03 compute-0 systemd[1]: libpod-conmon-661a841f4458870618f963882131f1ab7ac18301708f7cfe9b342e6f78a0652f.scope: Deactivated successfully.
Nov 29 05:30:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:03 compute-0 podman[257299]: 2025-11-29 05:30:03.872373018 +0000 UTC m=+0.030096261 container create 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:30:03 compute-0 systemd[1]: Started libpod-conmon-139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6.scope.
Nov 29 05:30:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89688d4c98f8ecd694e6dc8beb07925d22dcbeeaf9da50732c0538eaadb02c24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89688d4c98f8ecd694e6dc8beb07925d22dcbeeaf9da50732c0538eaadb02c24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89688d4c98f8ecd694e6dc8beb07925d22dcbeeaf9da50732c0538eaadb02c24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:30:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89688d4c98f8ecd694e6dc8beb07925d22dcbeeaf9da50732c0538eaadb02c24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:30:03 compute-0 podman[257299]: 2025-11-29 05:30:03.926892736 +0000 UTC m=+0.084615999 container init 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:30:03 compute-0 podman[257299]: 2025-11-29 05:30:03.932889886 +0000 UTC m=+0.090613129 container start 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:30:03 compute-0 podman[257299]: 2025-11-29 05:30:03.935843304 +0000 UTC m=+0.093566547 container attach 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 05:30:03 compute-0 podman[257299]: 2025-11-29 05:30:03.859419997 +0000 UTC m=+0.017143260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]: {
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "osd_id": 0,
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "type": "bluestore"
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:     },
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "osd_id": 1,
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "type": "bluestore"
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:     },
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "osd_id": 2,
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:         "type": "bluestore"
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]:     }
Nov 29 05:30:04 compute-0 upbeat_keldysh[257316]: }
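[editor's note] The JSON object above is emitted by the short-lived cephadm exec container (podman name upbeat_keldysh, started and removed within about a second in the surrounding podman events); its shape matches ceph-volume raw list output: a map keyed by OSD UUID describing three BlueStore OSDs on LVM devices, all in the same cluster fsid. A minimal sketch of consuming such a blob, assuming it has been captured to a file (the filename is illustrative):

    import json

    # Parse a ceph-volume raw-list style inventory like the one logged above.
    # "raw_list.json" is a hypothetical capture of that JSON blob.
    with open("raw_list.json") as f:
        osds = json.load(f)

    for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        # The map is keyed by the same UUID repeated in the value.
        assert info["osd_uuid"] == osd_uuid
        print(f"osd.{info['osd_id']}: {info['device']} "
              f"(type={info['type']}, cluster={info['ceph_fsid']})")

Run against the logged data this prints osd.0 through osd.2 on ceph_vg0-ceph_lv0 through ceph_vg2-ceph_lv2.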
Nov 29 05:30:04 compute-0 ceph-mon[75176]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:04 compute-0 systemd[1]: libpod-139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6.scope: Deactivated successfully.
Nov 29 05:30:04 compute-0 podman[257299]: 2025-11-29 05:30:04.847709885 +0000 UTC m=+1.005433128 container died 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:30:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-89688d4c98f8ecd694e6dc8beb07925d22dcbeeaf9da50732c0538eaadb02c24-merged.mount: Deactivated successfully.
Nov 29 05:30:04 compute-0 podman[257299]: 2025-11-29 05:30:04.908954199 +0000 UTC m=+1.066677452 container remove 139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:30:04 compute-0 systemd[1]: libpod-conmon-139b2d34b7ba30f1e1c7a7e80dc926b81e15597c1e49322cc1bdef7ba2f930d6.scope: Deactivated successfully.
Nov 29 05:30:04 compute-0 sudo[257192]: pam_unix(sudo:session): session closed for user root
Nov 29 05:30:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:30:04 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:30:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:30:04 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:30:04 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev f0e72c40-9a88-4257-89d0-a12d3963f939 does not exist
Nov 29 05:30:04 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 9da554f6-cd45-4b4e-9f12-932cb921bf97 does not exist
Nov 29 05:30:05 compute-0 sudo[257363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:30:05 compute-0 sudo[257363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:30:05 compute-0 sudo[257363]: pam_unix(sudo:session): session closed for user root
Nov 29 05:30:05 compute-0 sudo[257388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:30:05 compute-0 sudo[257388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:30:05 compute-0 sudo[257388]: pam_unix(sudo:session): session closed for user root
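[editor's note] The sudo pairs above (/bin/true, then ls /etc/sysctl.d) are cephadm's host checks arriving over its SSH channel as the ceph-admin user: the no-op confirms passwordless escalation works, the ls inspects host sysctl config. A minimal sketch of the same reachability-and-sudo probe, not cephadm's actual code; "ceph-admin" and "compute-0" are taken from the log, everything else is illustrative:

    import subprocess

    def can_sudo(host: str, user: str = "ceph-admin") -> bool:
        # `sudo -n` fails instead of prompting, which is exactly what we
        # want to test for an unattended deploy user.
        result = subprocess.run(
            ["ssh", f"{user}@{host}", "sudo", "-n", "/bin/true"],
            capture_output=True, timeout=30,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        print(can_sudo("compute-0"))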
Nov 29 05:30:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:30:05 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:30:06 compute-0 ceph-mon[75176]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:07 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 29 05:30:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:07.985555) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:30:07 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 29 05:30:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394207985615, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1497, "num_deletes": 251, "total_data_size": 2371344, "memory_usage": 2410304, "flush_reason": "Manual Compaction"}
Nov 29 05:30:07 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394208009876, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2316672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14827, "largest_seqno": 16323, "table_properties": {"data_size": 2309752, "index_size": 3991, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14339, "raw_average_key_size": 19, "raw_value_size": 2295825, "raw_average_value_size": 3157, "num_data_blocks": 183, "num_entries": 727, "num_filter_entries": 727, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394054, "oldest_key_time": 1764394054, "file_creation_time": 1764394207, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 24349 microseconds, and 11891 cpu microseconds.
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.009915) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2316672 bytes OK
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.009934) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.011933) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.012003) EVENT_LOG_v1 {"time_micros": 1764394208011989, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.012039) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2364766, prev total WAL file size 2364766, number of live WAL files 2.
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.013540) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2262KB)], [35(6993KB)]
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394208013598, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9477879, "oldest_snapshot_seqno": -1}
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3996 keys, 7692678 bytes, temperature: kUnknown
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394208063776, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7692678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7663763, "index_size": 17797, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97600, "raw_average_key_size": 24, "raw_value_size": 7589305, "raw_average_value_size": 1899, "num_data_blocks": 754, "num_entries": 3996, "num_filter_entries": 3996, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.064147) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7692678 bytes
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.065612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.4 rd, 152.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.8 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4510, records dropped: 514 output_compression: NoCompression
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.065648) EVENT_LOG_v1 {"time_micros": 1764394208065636, "job": 16, "event": "compaction_finished", "compaction_time_micros": 50305, "compaction_time_cpu_micros": 21340, "output_level": 6, "num_output_files": 1, "total_output_size": 7692678, "num_input_records": 4510, "num_output_records": 3996, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394208066121, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394208067430, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.013429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.067546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.067555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.067559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.067563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:30:08 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:30:08.067566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
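[editor's note] The RocksDB burst above is one memtable flush (JOB 15, table #37 to L0) followed by a manual compaction (JOB 16, L0+L6 into a new L6 table #38) in the mon's store.db; the amplification figures in the JOB 16 summary follow directly from the logged in/out sizes. A worked check, assuming the conventional definitions (write-amp = bytes written / bytes entering from L0; read-write-amp = (bytes read + bytes written) / bytes entering from L0):

    # Figures copied from the "in(2.2, 6.8 +0.0 blob) out(7.3 +0.0 blob)" summary.
    l0_in_mb, l6_in_mb, out_mb = 2.2, 6.8, 7.3

    write_amp = out_mb / l0_in_mb                        # 7.3 / 2.2  ~= 3.3
    rw_amp = (l0_in_mb + l6_in_mb + out_mb) / l0_in_mb   # 16.3 / 2.2 ~= 7.4

    print(f"write-amplify ~ {write_amp:.1f}, read-write-amplify ~ {rw_amp:.1f}")

Both values match the logged write-amplify(3.3) and read-write-amplify(7.4); the 4510-in/3996-out record counts reflect the 514 tombstoned or superseded entries dropped during the merge.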
Nov 29 05:30:08 compute-0 ceph-mon[75176]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:09 compute-0 podman[257413]: 2025-11-29 05:30:09.019790477 +0000 UTC m=+0.067622274 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, config_id=multipathd)
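[editor's note] The podman health_status=healthy events recurring through this section (multipathd, ovn_controller, ovn_metadata_agent) are produced each time a container's configured healthcheck fires; per the embedded config_data, the test is the /openstack/healthcheck script bind-mounted read-only from /var/lib/openstack/healthchecks/<service>. A sketch of triggering the same check by hand, assuming the container name from the log:

    import subprocess

    def is_healthy(name: str) -> bool:
        # `podman healthcheck run` executes the container's configured check
        # and exits 0 when it passes, which is what the timer-driven events
        # above record as health_status=healthy.
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    print(is_healthy("multipathd"))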
Nov 29 05:30:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:10 compute-0 ceph-mon[75176]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:30:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:30:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:30:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:30:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:30:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:30:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:13 compute-0 ceph-mon[75176]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:30:13.741 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:30:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:30:13.741 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:30:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:30:13.742 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:30:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:14 compute-0 podman[257434]: 2025-11-29 05:30:14.114229673 +0000 UTC m=+0.150380420 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 05:30:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:30:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/630534927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:30:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:30:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/630534927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:30:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:15 compute-0 ceph-mon[75176]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/630534927' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:30:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/630534927' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
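[editor's note] The df / osd pool get-quota pairs in the audit channel are client.openstack (the Cinder host at 192.168.122.10) polling pool capacity; the mon logs each command twice, once at dispatch in handle_command and once in the audit summary. The same JSON-framed mon commands can be issued from librados. A minimal sketch using python-rados, assuming the conf path and that a matching keyring is readable; client name and pool are taken from the log:

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command takes the JSON command string and an input buffer,
            # returning (retcode, output bytes, error string).
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, json.loads(out or "{}"))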
Nov 29 05:30:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:17 compute-0 ceph-mon[75176]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:19 compute-0 ceph-mon[75176]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:21 compute-0 ceph-mon[75176]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:21 compute-0 sshd-session[257460]: Received disconnect from 120.48.175.69 port 53816:11: Bye Bye [preauth]
Nov 29 05:30:21 compute-0 sshd-session[257460]: Disconnected from authenticating user root 120.48.175.69 port 53816 [preauth]
Nov 29 05:30:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:22 compute-0 podman[257462]: 2025-11-29 05:30:22.050411479 +0000 UTC m=+0.087410524 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:30:23 compute-0 ceph-mon[75176]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:25 compute-0 ceph-mon[75176]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.206 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.207 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.224 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.225 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.225 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.236 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.236 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.237 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.237 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.237 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.989 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.990 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.990 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.991 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:30:26 compute-0 nova_compute[254898]: 2025-11-29 05:30:26.991 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:30:27 compute-0 ceph-mon[75176]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:30:27 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121882001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:30:27 compute-0 nova_compute[254898]: 2025-11-29 05:30:27.478 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:30:27 compute-0 nova_compute[254898]: 2025-11-29 05:30:27.667 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:30:27 compute-0 nova_compute[254898]: 2025-11-29 05:30:27.668 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:30:27 compute-0 nova_compute[254898]: 2025-11-29 05:30:27.669 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:30:27 compute-0 nova_compute[254898]: 2025-11-29 05:30:27.669 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:30:27 compute-0 nova_compute[254898]: 2025-11-29 05:30:27.766 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:30:27 compute-0 nova_compute[254898]: 2025-11-29 05:30:27.766 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:30:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:27 compute-0 nova_compute[254898]: 2025-11-29 05:30:27.784 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:30:28 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1121882001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:30:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:30:28 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735413006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:30:28 compute-0 nova_compute[254898]: 2025-11-29 05:30:28.294 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:30:28 compute-0 nova_compute[254898]: 2025-11-29 05:30:28.299 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:30:28 compute-0 nova_compute[254898]: 2025-11-29 05:30:28.317 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:30:28 compute-0 nova_compute[254898]: 2025-11-29 05:30:28.319 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:30:28 compute-0 nova_compute[254898]: 2025-11-29 05:30:28.320 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
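[editor's note] The block above is one pass of nova's update_available_resource periodic task: it shells out to `ceph df` (twice, logged by the mon as client.openstack from 192.168.122.100) because the libvirt driver backs ephemeral disks with RBD, so free_disk=59.98GB in the hypervisor view tracks the 60 GiB cluster shown in the pgmap lines, and the placement inventory applies the ratios verbatim (8 VCPU at allocation_ratio 4.0, 7680 MB RAM with 512 MB reserved, 59 DISK_GB at 0.9). A sketch of deriving the same free-space figure, assuming the standard `ceph df --format=json` top-level "stats" keys (total_bytes, total_avail_bytes); the CLI flags mirror the logged command:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]

    free_gb = stats["total_avail_bytes"] / 1024 ** 3
    total_gb = stats["total_bytes"] / 1024 ** 3
    print(f"free={free_gb:.2f} GiB of {total_gb:.2f} GiB")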
Nov 29 05:30:29 compute-0 ceph-mon[75176]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:29 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3735413006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:30:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:30 compute-0 ceph-mon[75176]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:31 compute-0 sshd-session[257526]: Invalid user admin1 from 152.32.145.111 port 50506
Nov 29 05:30:32 compute-0 sshd-session[257526]: Received disconnect from 152.32.145.111 port 50506:11: Bye Bye [preauth]
Nov 29 05:30:32 compute-0 sshd-session[257526]: Disconnected from invalid user admin1 152.32.145.111 port 50506 [preauth]
Nov 29 05:30:32 compute-0 ceph-mon[75176]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:34 compute-0 ceph-mon[75176]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:36 compute-0 ceph-mon[75176]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:38 compute-0 sshd-session[257528]: Invalid user postgres from 120.48.175.69 port 57742
Nov 29 05:30:38 compute-0 sshd-session[257528]: Received disconnect from 120.48.175.69 port 57742:11: Bye Bye [preauth]
Nov 29 05:30:38 compute-0 sshd-session[257528]: Disconnected from invalid user postgres 120.48.175.69 port 57742 [preauth]
Nov 29 05:30:38 compute-0 ceph-mon[75176]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:39 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:30:39.205 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:30:39 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:30:39.205 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 05:30:39 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:30:39.207 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
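[editor's note] Here the metadata agent saw SB_Global.nb_cfg move from 1 to 2 and acknowledges it by stamping neutron:ovn-metadata-sb-cfg into its Chassis_Private external_ids via an ovsdbapp DbSetCommand; this write is what neutron's agent liveness check reads back. An illustrative hand-rolled equivalent of that transaction using the ovn-sbctl generic column setter (record UUID and value copied from the log; note the key itself contains colons and so must be quoted inside the argument):

    import subprocess

    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private",
         "63cfe9d2-e938-418d-9401-5d1a600b4ede",
         'external_ids:"neutron:ovn-metadata-sb-cfg"=2'],
        check=True)

This is a sketch of the effect only; the agent itself goes through ovsdbapp's IDL transaction machinery, as the do_commit line shows.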
Nov 29 05:30:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:40 compute-0 podman[257530]: 2025-11-29 05:30:40.040429005 +0000 UTC m=+0.083832621 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 05:30:40 compute-0 ceph-mon[75176]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:30:41
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root']
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
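[editor's note] The balancer lines record one optimizer pass: mode upmap, a 5% max-misplaced budget, and "prepared 0/10 changes", i.e. it found nothing worth moving across the eleven listed pools, consistent with all 305 PGs being active+clean throughout this window. A quick way to confirm the same state, assuming `ceph balancer status` JSON fields that may vary slightly by release:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json"]))
    # "mode" should report "upmap" and "active" True for the cluster above.
    print(status.get("mode"), status.get("active"), status.get("optimize_result"))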
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:30:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:42 compute-0 ceph-mon[75176]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:44 compute-0 ceph-mon[75176]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:45 compute-0 podman[257550]: 2025-11-29 05:30:45.014982322 +0000 UTC m=+0.072568039 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 05:30:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:46 compute-0 ceph-mon[75176]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:48 compute-0 ceph-mon[75176]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:50 compute-0 ceph-mon[75176]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
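[annotation] The pg_autoscaler lines above contain enough to reproduce the arithmetic: pg target = capacity ratio x bias x (target PGs per OSD x OSD count), then rounding to a power of two no lower than the pool's floor. A minimal Python sketch, assuming the values these logs imply (3 OSDs, the default 100 PGs per OSD, a 32-PG default floor, with lower floors of 1 for '.mgr' and 16 for the CephFS metadata pool); the helper names are illustrative, not the mgr module's own:

    import math

    def nearest_pow2(x: float) -> int:
        # Round to the nearest power of two, never below 1.
        return 1 if x <= 1 else 2 ** round(math.log2(x))

    def pg_target(capacity_ratio: float, bias: float, pg_num_min: int = 32,
                  n_osds: int = 3, pg_per_osd: int = 100) -> int:
        raw = capacity_ratio * bias * pg_per_osd * n_osds
        return max(pg_num_min, nearest_pow2(raw))

    # '.mgr': 7.185749983720779e-06 * 1.0 * 300 = 0.0021557... -> quantized to 1
    assert pg_target(7.185749983720779e-06, 1.0, pg_num_min=1) == 1
    # 'cephfs.cephfs.meta': bias 4.0, assumed floor 16 -> stays at 16
    assert pg_target(5.087256625643029e-07, 4.0, pg_num_min=16) == 16
    # Empty pools such as 'vms': raw target 0.0 -> default floor 32
    assert pg_target(0.0, 1.0) == 32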
Nov 29 05:30:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:53 compute-0 podman[257576]: 2025-11-29 05:30:53.004907656 +0000 UTC m=+0.050979256 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
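[annotation] The health_status=healthy event above is podman's healthcheck timer running the configured '/openstack/healthcheck' test inside the container. The same probe can be driven by hand; a short sketch, using the container name as logged and assuming podman is on PATH with access to these root containers:

    import subprocess

    # `podman healthcheck run` executes the container's configured test and
    # exits 0 when healthy -- the same check behind the event above.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")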
Nov 29 05:30:53 compute-0 ceph-mon[75176]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:30:55 compute-0 ceph-mon[75176]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:55 compute-0 sshd-session[257595]: Invalid user vtatis from 120.48.175.69 port 33432
Nov 29 05:30:55 compute-0 sshd-session[257595]: Received disconnect from 120.48.175.69 port 33432:11: Bye Bye [preauth]
Nov 29 05:30:55 compute-0 sshd-session[257595]: Disconnected from invalid user vtatis 120.48.175.69 port 33432 [preauth]
Nov 29 05:30:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:57 compute-0 ceph-mon[75176]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:30:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:59 compute-0 ceph-mon[75176]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:30:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:01 compute-0 ceph-mon[75176]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:03 compute-0 ceph-mon[75176]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:05 compute-0 ceph-mon[75176]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:05 compute-0 sudo[257597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:05 compute-0 sudo[257597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:05 compute-0 sudo[257597]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:05 compute-0 sudo[257622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:31:05 compute-0 sudo[257622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:05 compute-0 sudo[257622]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:05 compute-0 sudo[257647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:05 compute-0 sudo[257647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:05 compute-0 sudo[257647]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:05 compute-0 sudo[257672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:31:05 compute-0 sudo[257672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:05 compute-0 podman[257770]: 2025-11-29 05:31:05.854731458 +0000 UTC m=+0.055931864 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:31:05 compute-0 podman[257770]: 2025-11-29 05:31:05.959914523 +0000 UTC m=+0.161115009 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:31:06 compute-0 sudo[257672]: pam_unix(sudo:session): session closed for user root
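[annotation] cephadm subcommands such as the `ls` just dispatched (and the `gather-facts` below) print JSON on stdout. A minimal sketch of collecting the daemon inventory, reusing the binary path and timeout from the sudo line above but trimming the --image flag; the output field names follow cephadm's usual schema and should be treated as assumptions:

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # `cephadm ls` prints a JSON array, one entry per daemon on this host.
    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "ls"],
        capture_output=True, text=True, check=True,
    ).stdout
    for daemon in json.loads(out):
        print(daemon.get("name"), daemon.get("state"))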
Nov 29 05:31:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:31:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:31:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:31:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:31:06 compute-0 sshd-session[257725]: Invalid user admin1 from 45.120.216.232 port 40984
Nov 29 05:31:06 compute-0 sudo[257926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:06 compute-0 sudo[257926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:06 compute-0 sudo[257926]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:06 compute-0 sudo[257951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:31:06 compute-0 sudo[257951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:06 compute-0 sudo[257951]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:06 compute-0 sudo[257976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:06 compute-0 sudo[257976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:06 compute-0 sudo[257976]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:06 compute-0 sshd-session[257725]: Received disconnect from 45.120.216.232 port 40984:11: Bye Bye [preauth]
Nov 29 05:31:06 compute-0 sshd-session[257725]: Disconnected from invalid user admin1 45.120.216.232 port 40984 [preauth]
Nov 29 05:31:07 compute-0 sudo[258001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:31:07 compute-0 sudo[258001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:07 compute-0 ceph-mon[75176]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:31:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:31:07 compute-0 sudo[258001]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 05:31:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:31:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:31:07 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:31:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:31:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:31:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:31:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:31:07 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev c2aac0cd-a40a-4fc6-9e67-ac6c3c0d7731 does not exist
Nov 29 05:31:07 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 1871e192-ed9d-4549-8627-7d33a399a5b6 does not exist
Nov 29 05:31:07 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 6871a3b8-082b-44d9-9b8f-7c19e53de6be does not exist
Nov 29 05:31:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:31:07 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:31:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:31:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:31:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:31:07 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
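[annotation] Each handle_command/audit pair above is a JSON mon_command of exactly the shape python-rados sends. A minimal sketch replaying the logged "osd tree" query, assuming /etc/ceph/ceph.conf and an admin keyring are readable on this host:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same payload the mgr dispatched above.
    cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"], "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b"")
    print(json.loads(outbuf) if ret == 0 else errs)
    cluster.shutdown()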
Nov 29 05:31:07 compute-0 sudo[258057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:07 compute-0 sudo[258057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:07 compute-0 sudo[258057]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:07 compute-0 sudo[258082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:31:07 compute-0 sudo[258082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:07 compute-0 sudo[258082]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:07 compute-0 sudo[258107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:07 compute-0 sudo[258107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:07 compute-0 sudo[258107]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:07 compute-0 sudo[258132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:31:07 compute-0 sudo[258132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:31:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:31:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:31:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:31:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:31:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:31:08 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:31:08 compute-0 podman[258197]: 2025-11-29 05:31:08.384209209 +0000 UTC m=+0.065415003 container create 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:31:08 compute-0 systemd[1]: Started libpod-conmon-6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955.scope.
Nov 29 05:31:08 compute-0 podman[258197]: 2025-11-29 05:31:08.356033468 +0000 UTC m=+0.037239292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:31:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:31:08 compute-0 podman[258197]: 2025-11-29 05:31:08.476890762 +0000 UTC m=+0.158096526 container init 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:31:08 compute-0 podman[258197]: 2025-11-29 05:31:08.489068687 +0000 UTC m=+0.170274471 container start 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 05:31:08 compute-0 podman[258197]: 2025-11-29 05:31:08.49250122 +0000 UTC m=+0.173706994 container attach 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:31:08 compute-0 vigilant_goldwasser[258213]: 167 167
Nov 29 05:31:08 compute-0 systemd[1]: libpod-6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955.scope: Deactivated successfully.
Nov 29 05:31:08 compute-0 podman[258197]: 2025-11-29 05:31:08.498086115 +0000 UTC m=+0.179291889 container died 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:31:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-efd1d9689d6710bced8604af73539b3e3079ec30e0bb10ddde31b3e5c21b4a74-merged.mount: Deactivated successfully.
Nov 29 05:31:08 compute-0 podman[258197]: 2025-11-29 05:31:08.549371656 +0000 UTC m=+0.230577400 container remove 6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_goldwasser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:31:08 compute-0 systemd[1]: libpod-conmon-6cca57ee8bbf7042b041436576073c06c821dce57ddf3904d192a4ba77dd8955.scope: Deactivated successfully.
Nov 29 05:31:08 compute-0 podman[258238]: 2025-11-29 05:31:08.828254983 +0000 UTC m=+0.074691068 container create 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 29 05:31:08 compute-0 systemd[1]: Started libpod-conmon-46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad.scope.
Nov 29 05:31:08 compute-0 podman[258238]: 2025-11-29 05:31:08.797494069 +0000 UTC m=+0.043930164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:31:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:08 compute-0 podman[258238]: 2025-11-29 05:31:08.950961222 +0000 UTC m=+0.197397277 container init 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 05:31:08 compute-0 podman[258238]: 2025-11-29 05:31:08.968345152 +0000 UTC m=+0.214781207 container start 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 05:31:08 compute-0 podman[258238]: 2025-11-29 05:31:08.973343654 +0000 UTC m=+0.219779799 container attach 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:31:09 compute-0 ceph-mon[75176]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:10 compute-0 eloquent_hertz[258255]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:31:10 compute-0 eloquent_hertz[258255]: --> relative data size: 1.0
Nov 29 05:31:10 compute-0 eloquent_hertz[258255]: --> All data devices are unavailable
Nov 29 05:31:10 compute-0 systemd[1]: libpod-46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad.scope: Deactivated successfully.
Nov 29 05:31:10 compute-0 podman[258238]: 2025-11-29 05:31:10.102401862 +0000 UTC m=+1.348837917 container died 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 05:31:10 compute-0 systemd[1]: libpod-46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad.scope: Consumed 1.087s CPU time.
Nov 29 05:31:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea6c361273bd8c133c2c96d8808aa66c1e44db20343ce2854c6bf9642c457bb8-merged.mount: Deactivated successfully.
Nov 29 05:31:10 compute-0 podman[258238]: 2025-11-29 05:31:10.184613291 +0000 UTC m=+1.431049366 container remove 46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 05:31:10 compute-0 systemd[1]: libpod-conmon-46158fcc5acebd3b4b6da06a4a2eace7531e387533d3fce97f1ab94723afd5ad.scope: Deactivated successfully.
Nov 29 05:31:10 compute-0 sudo[258132]: pam_unix(sudo:session): session closed for user root
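[annotation] The "All data devices are unavailable" result above reads as the idempotent outcome: all three LVs already carry ceph.* lv_tags from an earlier prepare (consistent with the lvm list output below), so `lvm batch` filters them out and creates nothing. A sketch of the same pre-check, assuming the lvs binary is present; the filtering logic here is illustrative, not ceph-volume's own:

    import subprocess

    # List LV path + tags; an existing ceph.osd_id tag marks the LV as
    # already consumed by an OSD, which is why batch skipped all three.
    out = subprocess.run(
        ["lvs", "--noheadings", "-o", "lv_path,lv_tags"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        fields = line.split(None, 1)
        path = fields[0]
        tags = fields[1] if len(fields) > 1 else ""
        if "ceph.osd_id=" in tags:
            print(f"skip {path}: already an OSD")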
Nov 29 05:31:10 compute-0 podman[258285]: 2025-11-29 05:31:10.238120626 +0000 UTC m=+0.096191158 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 05:31:10 compute-0 sudo[258315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:10 compute-0 sudo[258315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:10 compute-0 sudo[258315]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:10 compute-0 sudo[258340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:31:10 compute-0 sudo[258340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:10 compute-0 sudo[258340]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:10 compute-0 sudo[258365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:10 compute-0 sudo[258365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:10 compute-0 sudo[258365]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:10 compute-0 sudo[258390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:31:10 compute-0 sudo[258390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:11 compute-0 podman[258455]: 2025-11-29 05:31:11.089708461 +0000 UTC m=+0.066163493 container create 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:31:11 compute-0 ceph-mon[75176]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:11 compute-0 podman[258455]: 2025-11-29 05:31:11.058192238 +0000 UTC m=+0.034647280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:31:11 compute-0 systemd[1]: Started libpod-conmon-9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe.scope.
Nov 29 05:31:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:31:11 compute-0 podman[258455]: 2025-11-29 05:31:11.208405222 +0000 UTC m=+0.184860234 container init 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:31:11 compute-0 podman[258455]: 2025-11-29 05:31:11.220989057 +0000 UTC m=+0.197444089 container start 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:31:11 compute-0 podman[258455]: 2025-11-29 05:31:11.225761983 +0000 UTC m=+0.202217005 container attach 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:31:11 compute-0 gifted_carver[258472]: 167 167
Nov 29 05:31:11 compute-0 systemd[1]: libpod-9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe.scope: Deactivated successfully.
Nov 29 05:31:11 compute-0 podman[258455]: 2025-11-29 05:31:11.233159591 +0000 UTC m=+0.209614613 container died 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:31:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba99f1919b6833fc97561f031767b340687fa1a5195e13cf91bcf9013e097bf4-merged.mount: Deactivated successfully.
Nov 29 05:31:11 compute-0 podman[258455]: 2025-11-29 05:31:11.281318226 +0000 UTC m=+0.257773258 container remove 9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:31:11 compute-0 systemd[1]: libpod-conmon-9c842b68cad68d388ea1a900ecd27081ead5d937665823bf81cd2f7ecf7125fe.scope: Deactivated successfully.
Nov 29 05:31:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:31:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:31:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:31:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:31:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:31:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:31:11 compute-0 podman[258496]: 2025-11-29 05:31:11.503654506 +0000 UTC m=+0.059118431 container create 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 05:31:11 compute-0 systemd[1]: Started libpod-conmon-4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792.scope.
Nov 29 05:31:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc9485d623d3e0af57ca5ee0af2c5a274147f06f6330d82489d989922f9a61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc9485d623d3e0af57ca5ee0af2c5a274147f06f6330d82489d989922f9a61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc9485d623d3e0af57ca5ee0af2c5a274147f06f6330d82489d989922f9a61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79cc9485d623d3e0af57ca5ee0af2c5a274147f06f6330d82489d989922f9a61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:11 compute-0 podman[258496]: 2025-11-29 05:31:11.485980139 +0000 UTC m=+0.041444084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:31:11 compute-0 podman[258496]: 2025-11-29 05:31:11.58774653 +0000 UTC m=+0.143210475 container init 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:31:11 compute-0 podman[258496]: 2025-11-29 05:31:11.593936901 +0000 UTC m=+0.149400826 container start 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:31:11 compute-0 podman[258496]: 2025-11-29 05:31:11.597820564 +0000 UTC m=+0.153284489 container attach 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:31:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:12 compute-0 ceph-mon[75176]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:12 compute-0 zen_swartz[258512]: {
Nov 29 05:31:12 compute-0 zen_swartz[258512]:     "0": [
Nov 29 05:31:12 compute-0 zen_swartz[258512]:         {
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "devices": [
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "/dev/loop3"
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             ],
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_name": "ceph_lv0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_size": "21470642176",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "name": "ceph_lv0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "tags": {
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.cluster_name": "ceph",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.crush_device_class": "",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.encrypted": "0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.osd_id": "0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.type": "block",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.vdo": "0"
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             },
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "type": "block",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "vg_name": "ceph_vg0"
Nov 29 05:31:12 compute-0 zen_swartz[258512]:         }
Nov 29 05:31:12 compute-0 zen_swartz[258512]:     ],
Nov 29 05:31:12 compute-0 zen_swartz[258512]:     "1": [
Nov 29 05:31:12 compute-0 zen_swartz[258512]:         {
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "devices": [
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "/dev/loop4"
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             ],
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_name": "ceph_lv1",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_size": "21470642176",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "name": "ceph_lv1",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "tags": {
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.cluster_name": "ceph",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.crush_device_class": "",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.encrypted": "0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.osd_id": "1",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.type": "block",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.vdo": "0"
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             },
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "type": "block",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "vg_name": "ceph_vg1"
Nov 29 05:31:12 compute-0 zen_swartz[258512]:         }
Nov 29 05:31:12 compute-0 zen_swartz[258512]:     ],
Nov 29 05:31:12 compute-0 zen_swartz[258512]:     "2": [
Nov 29 05:31:12 compute-0 zen_swartz[258512]:         {
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "devices": [
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "/dev/loop5"
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             ],
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_name": "ceph_lv2",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_size": "21470642176",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "name": "ceph_lv2",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "tags": {
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.cluster_name": "ceph",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.crush_device_class": "",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.encrypted": "0",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.osd_id": "2",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.type": "block",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:                 "ceph.vdo": "0"
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             },
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "type": "block",
Nov 29 05:31:12 compute-0 zen_swartz[258512]:             "vg_name": "ceph_vg2"
Nov 29 05:31:12 compute-0 zen_swartz[258512]:         }
Nov 29 05:31:12 compute-0 zen_swartz[258512]:     ]
Nov 29 05:31:12 compute-0 zen_swartz[258512]: }
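[annotation] The JSON above is the output of `ceph-volume lvm list --format json` captured from the short-lived `zen_swartz` container: one entry per OSD id, with the backing loop device, the LV path, and the `ceph.*` LVM tags that ceph-volume reads back at activation time. Each `lv_size` of 21470642176 bytes is roughly 20 GiB, which matches the 60 GiB raw capacity in the pgmap lines further down (3 OSDs x 20 GiB). A minimal sketch for turning this output into an osd_id -> device map; the filename `lvm_list.json` is hypothetical, standing in for the container stdout shown above:

    import json

    # Hypothetical capture of the zen_swartz stdout shown above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")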
Nov 29 05:31:12 compute-0 systemd[1]: libpod-4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792.scope: Deactivated successfully.
Nov 29 05:31:12 compute-0 podman[258496]: 2025-11-29 05:31:12.32957758 +0000 UTC m=+0.885041535 container died 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:31:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-79cc9485d623d3e0af57ca5ee0af2c5a274147f06f6330d82489d989922f9a61-merged.mount: Deactivated successfully.
Nov 29 05:31:12 compute-0 podman[258496]: 2025-11-29 05:31:12.389972551 +0000 UTC m=+0.945436506 container remove 4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_swartz, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:31:12 compute-0 systemd[1]: libpod-conmon-4db0d3addad0b6e32759c25ca6d1b902608ae3f567d2cd108afa59530500d792.scope: Deactivated successfully.
Nov 29 05:31:12 compute-0 sudo[258390]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:12 compute-0 sudo[258537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:12 compute-0 sudo[258537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:12 compute-0 sudo[258537]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:12 compute-0 sudo[258562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:31:12 compute-0 sudo[258562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:12 compute-0 sudo[258562]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:12 compute-0 sudo[258587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:12 compute-0 sudo[258587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:12 compute-0 sudo[258587]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:12 compute-0 sudo[258612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:31:12 compute-0 sudo[258612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
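[annotation] In the sudo line above, cephadm re-executes a pinned copy of itself from /var/lib/ceph/<fsid>/ to run `ceph-volume raw list` in another throwaway container. A sketch of driving the same listing from Python, dropping the `--image`/`--timeout` pinning for brevity and assuming a `cephadm` binary on PATH that accepts the same arguments as the copy invoked in the log:

    import json
    import subprocess

    FSID = "93f82912-647c-5e78-b081-707d0a2966d8"

    # Mirrors the command in the log; assumes `cephadm` is on PATH.
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID,
         "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    raw = json.loads(out)
    print(sorted(raw, key=lambda uuid: raw[uuid]["osd_id"]))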
Nov 29 05:31:13 compute-0 sshd-session[258517]: Invalid user developer from 120.48.175.69 port 37476
Nov 29 05:31:13 compute-0 podman[258677]: 2025-11-29 05:31:13.109205533 +0000 UTC m=+0.069149403 container create 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:31:13 compute-0 systemd[1]: Started libpod-conmon-5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6.scope.
Nov 29 05:31:13 compute-0 podman[258677]: 2025-11-29 05:31:13.06522677 +0000 UTC m=+0.025170670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:31:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:31:13 compute-0 podman[258677]: 2025-11-29 05:31:13.212695667 +0000 UTC m=+0.172639557 container init 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 05:31:13 compute-0 podman[258677]: 2025-11-29 05:31:13.219581004 +0000 UTC m=+0.179524874 container start 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 05:31:13 compute-0 awesome_chebyshev[258694]: 167 167
Nov 29 05:31:13 compute-0 systemd[1]: libpod-5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6.scope: Deactivated successfully.
Nov 29 05:31:13 compute-0 sshd-session[258517]: Received disconnect from 120.48.175.69 port 37476:11: Bye Bye [preauth]
Nov 29 05:31:13 compute-0 sshd-session[258517]: Disconnected from invalid user developer 120.48.175.69 port 37476 [preauth]
Nov 29 05:31:13 compute-0 podman[258677]: 2025-11-29 05:31:13.245633614 +0000 UTC m=+0.205577514 container attach 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:31:13 compute-0 podman[258677]: 2025-11-29 05:31:13.246539356 +0000 UTC m=+0.206483226 container died 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-63d0ca1d97ffac121f4dab66a0b5159fcb49967a877c3fd09f0f2601f8f7f5e5-merged.mount: Deactivated successfully.
Nov 29 05:31:13 compute-0 podman[258677]: 2025-11-29 05:31:13.421637812 +0000 UTC m=+0.381581692 container remove 5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:31:13 compute-0 systemd[1]: libpod-conmon-5420c79bb6c7efd1725f617acb364cc72fc4c9011a956bb1d0d9e8122e298fc6.scope: Deactivated successfully.
Nov 29 05:31:13 compute-0 podman[258719]: 2025-11-29 05:31:13.63270173 +0000 UTC m=+0.052714217 container create dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:31:13 compute-0 systemd[1]: Started libpod-conmon-dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3.scope.
Nov 29 05:31:13 compute-0 podman[258719]: 2025-11-29 05:31:13.601006512 +0000 UTC m=+0.021018979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:31:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eee3ad71c9e4df4fcddf56c98515a18b5f27504efc46ca362ba9b03f78401ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eee3ad71c9e4df4fcddf56c98515a18b5f27504efc46ca362ba9b03f78401ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eee3ad71c9e4df4fcddf56c98515a18b5f27504efc46ca362ba9b03f78401ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eee3ad71c9e4df4fcddf56c98515a18b5f27504efc46ca362ba9b03f78401ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:31:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:31:13.742 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:31:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:31:13.744 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:31:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:31:13.745 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:31:13 compute-0 podman[258719]: 2025-11-29 05:31:13.745314564 +0000 UTC m=+0.165327051 container init dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:31:13 compute-0 podman[258719]: 2025-11-29 05:31:13.756563436 +0000 UTC m=+0.176575923 container start dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 05:31:13 compute-0 podman[258719]: 2025-11-29 05:31:13.771412826 +0000 UTC m=+0.191425353 container attach dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:31:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:31:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2105041158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:31:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:31:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2105041158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
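[annotation] The two mon_commands above ("df" and "osd pool get-quota" on the volumes pool) are the periodic capacity poll issued by the OpenStack client (entity client.openstack at 192.168.122.10). A sketch of the same two queries via the ceph CLI; the helper name `mon_cmd` is hypothetical, and the keyring/conf paths are assumed to match the ones the log implies:

    import json
    import subprocess

    def mon_cmd(*args):
        # Same commands the monitor logs as dispatched for client.openstack.
        out = subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    df = mon_cmd("df")
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")
    print(quota)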
Nov 29 05:31:14 compute-0 clever_carver[258736]: {
Nov 29 05:31:14 compute-0 clever_carver[258736]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "osd_id": 0,
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "type": "bluestore"
Nov 29 05:31:14 compute-0 clever_carver[258736]:     },
Nov 29 05:31:14 compute-0 clever_carver[258736]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "osd_id": 1,
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "type": "bluestore"
Nov 29 05:31:14 compute-0 clever_carver[258736]:     },
Nov 29 05:31:14 compute-0 clever_carver[258736]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "osd_id": 2,
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:31:14 compute-0 clever_carver[258736]:         "type": "bluestore"
Nov 29 05:31:14 compute-0 clever_carver[258736]:     }
Nov 29 05:31:14 compute-0 clever_carver[258736]: }
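[annotation] `ceph-volume raw list` (the `clever_carver` output ending above) reports the same three bluestore OSDs keyed by osd_uuid rather than osd_id, with their dm-mapper device paths. Cross-checking it against the earlier LVM listing is a quick consistency test; a sketch, assuming both JSON documents were saved from the container stdout (both filenames hypothetical):

    import json

    lvm = json.load(open("lvm_list.json"))   # zen_swartz output earlier
    raw = json.load(open("raw_list.json"))   # clever_carver output above

    for osd_uuid, entry in raw.items():
        lv = lvm[str(entry["osd_id"])][0]
        assert lv["tags"]["ceph.osd_fsid"] == osd_uuid
        assert lv["tags"]["ceph.cluster_fsid"] == entry["ceph_fsid"]
        print(f"osd.{entry['osd_id']} ok: {entry['device']} ({entry['type']})")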
Nov 29 05:31:14 compute-0 systemd[1]: libpod-dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3.scope: Deactivated successfully.
Nov 29 05:31:14 compute-0 systemd[1]: libpod-dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3.scope: Consumed 1.012s CPU time.
Nov 29 05:31:14 compute-0 podman[258769]: 2025-11-29 05:31:14.808413427 +0000 UTC m=+0.023910461 container died dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:31:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6eee3ad71c9e4df4fcddf56c98515a18b5f27504efc46ca362ba9b03f78401ac-merged.mount: Deactivated successfully.
Nov 29 05:31:15 compute-0 ceph-mon[75176]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/2105041158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:31:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/2105041158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:31:15 compute-0 podman[258769]: 2025-11-29 05:31:15.191049464 +0000 UTC m=+0.406546478 container remove dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:31:15 compute-0 systemd[1]: libpod-conmon-dbe681e0532622acf5c46731f00903500971a125d03f0f684e234cf7df9ffaf3.scope: Deactivated successfully.
Nov 29 05:31:15 compute-0 sudo[258612]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:15 compute-0 podman[258784]: 2025-11-29 05:31:15.223030428 +0000 UTC m=+0.135438728 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 05:31:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:31:15 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:31:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:31:15 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:31:15 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 7efd5189-295a-4b95-b16f-43c68a2520aa does not exist
Nov 29 05:31:15 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 2f481f00-c95c-40ed-9e84-ee471495af00 does not exist
Nov 29 05:31:15 compute-0 sudo[258811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:31:15 compute-0 sudo[258811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:15 compute-0 sudo[258811]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:15 compute-0 sudo[258836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:31:15 compute-0 sudo[258836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:31:15 compute-0 sudo[258836]: pam_unix(sudo:session): session closed for user root
Nov 29 05:31:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:16 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:31:16 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:31:16 compute-0 ceph-mon[75176]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:18 compute-0 ceph-mon[75176]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:20 compute-0 ceph-mon[75176]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:23 compute-0 ceph-mon[75176]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:24 compute-0 podman[258863]: 2025-11-29 05:31:24.005395472 +0000 UTC m=+0.054865078 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 05:31:24 compute-0 ceph-mon[75176]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:24 compute-0 sshd-session[258861]: Invalid user work from 101.47.141.125 port 57622
Nov 29 05:31:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:26 compute-0 sshd-session[258861]: Received disconnect from 101.47.141.125 port 57622:11: Bye Bye [preauth]
Nov 29 05:31:26 compute-0 sshd-session[258861]: Disconnected from invalid user work 101.47.141.125 port 57622 [preauth]
Nov 29 05:31:26 compute-0 nova_compute[254898]: 2025-11-29 05:31:26.320 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:31:26 compute-0 nova_compute[254898]: 2025-11-29 05:31:26.321 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:31:26 compute-0 nova_compute[254898]: 2025-11-29 05:31:26.321 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:31:26 compute-0 nova_compute[254898]: 2025-11-29 05:31:26.321 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:31:26 compute-0 ceph-mon[75176]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:26 compute-0 nova_compute[254898]: 2025-11-29 05:31:26.950 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:31:26 compute-0 nova_compute[254898]: 2025-11-29 05:31:26.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:31:26 compute-0 nova_compute[254898]: 2025-11-29 05:31:26.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:31:26 compute-0 nova_compute[254898]: 2025-11-29 05:31:26.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:31:26 compute-0 nova_compute[254898]: 2025-11-29 05:31:26.985 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:31:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:27 compute-0 nova_compute[254898]: 2025-11-29 05:31:27.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:31:27 compute-0 nova_compute[254898]: 2025-11-29 05:31:27.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.004 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.004 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.004 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.005 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.005 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:31:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:31:28 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237359323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.438 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
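[annotation] nova-compute's resource audit shells out to `ceph df` (the 0.433 s subprocess above) to size the RBD-backed storage. A sketch of reading the cluster totals from that JSON; the top-level "stats" block and its field names follow the Reef-era `ceph df --format=json` layout and should be treated as an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],   # exact command from the log
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]         # assumed layout: top-level "stats"
    gib = 1024 ** 3
    print(f"total={stats['total_bytes'] / gib:.1f} GiB "
          f"avail={stats['total_avail_bytes'] / gib:.1f} GiB")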
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.575 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.577 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.577 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.577 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.664 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.664 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:31:28 compute-0 nova_compute[254898]: 2025-11-29 05:31:28.693 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:31:28 compute-0 ceph-mon[75176]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:28 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3237359323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:31:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:31:29 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999455183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:31:29 compute-0 nova_compute[254898]: 2025-11-29 05:31:29.101 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:31:29 compute-0 nova_compute[254898]: 2025-11-29 05:31:29.106 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:31:29 compute-0 nova_compute[254898]: 2025-11-29 05:31:29.140 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
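[annotation] The inventory above encodes the oversubscription policy this host reports to placement: VCPU total=8 with allocation_ratio=4.0 lets the scheduler place up to 32 vCPUs here, while DISK_GB total=59 with allocation_ratio=0.9 caps disk at 53 GB. A worked sketch of the standard placement capacity formula, capacity = int((total - reserved) * allocation_ratio), applied to the logged values (the formula itself is stated as an assumption, not taken from this log):

    def capacity(total, reserved, allocation_ratio):
        # Usable units after subtracting the reservation and applying
        # the oversubscription ratio, truncated to an integer.
        return int((total - reserved) * allocation_ratio)

    print(capacity(8, 0, 4.0))       # VCPU      -> 32
    print(capacity(7680, 512, 1.0))  # MEMORY_MB -> 7168
    print(capacity(59, 0, 0.9))      # DISK_GB   -> 53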
Nov 29 05:31:29 compute-0 nova_compute[254898]: 2025-11-29 05:31:29.142 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:31:29 compute-0 nova_compute[254898]: 2025-11-29 05:31:29.142 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:31:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:29 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/999455183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:31:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:30 compute-0 nova_compute[254898]: 2025-11-29 05:31:30.142 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:31:30 compute-0 nova_compute[254898]: 2025-11-29 05:31:30.142 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:31:30 compute-0 ceph-mon[75176]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:32 compute-0 ceph-mon[75176]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:34 compute-0 ceph-mon[75176]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:37 compute-0 ceph-mon[75176]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:39 compute-0 ceph-mon[75176]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:40 compute-0 podman[258926]: 2025-11-29 05:31:40.997169706 +0000 UTC m=+0.054295495 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 05:31:41 compute-0 ceph-mon[75176]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:31:41
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups']
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:31:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:42 compute-0 ceph-mon[75176]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:45 compute-0 ceph-mon[75176]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:46 compute-0 podman[258947]: 2025-11-29 05:31:46.035054641 +0000 UTC m=+0.091026613 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 05:31:47 compute-0 ceph-mon[75176]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:49 compute-0 ceph-mon[75176]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:51 compute-0 ceph-mon[75176]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
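Each pg_autoscaler pass above pairs an effective_target_ratio line with a per-pool verdict. The pg target figures follow directly from the logged usage ratio and bias, assuming the autoscaler's usual model of usage_ratio x bias x (mon_target_pg_per_osd, default 100, times the OSD count, 3 on this host); the '.mgr' and 'cephfs.cephfs.meta' lines reproduce exactly:

    # Worked check of two pg_autoscaler lines above (values from the log).
    # Assumed model: pg_target = usage_ratio * bias * target_pgs, with
    # target_pgs = mon_target_pg_per_osd (100) * number of OSDs (3).
    target_pgs = 100 * 3

    mgr_ratio,  mgr_bias  = 7.185749983720779e-06, 1.0
    meta_ratio, meta_bias = 5.087256625643029e-07, 4.0

    print(mgr_ratio  * mgr_bias  * target_pgs)  # 0.0021557249951162337 ('.mgr')
    print(meta_ratio * meta_bias * target_pgs)  # 0.0006104707950771635 ('cephfs.cephfs.meta')

All of the raw targets sit far below the quantization floors, so every pool is rounded back to its current pg_num (1, 16, or 32) and no resize is queued.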
Nov 29 05:31:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:53 compute-0 ceph-mon[75176]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:54 compute-0 ceph-mon[75176]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:31:54 compute-0 podman[258973]: 2025-11-29 05:31:54.994395657 +0000 UTC m=+0.049533139 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 05:31:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:56 compute-0 ceph-mon[75176]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:58 compute-0 ceph-mon[75176]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:31:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:00 compute-0 ceph-mon[75176]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:02 compute-0 ceph-mon[75176]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:04 compute-0 ceph-mon[75176]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:06 compute-0 ceph-mon[75176]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:08 compute-0 sshd-session[258992]: Received disconnect from 152.32.145.111 port 60600:11: Bye Bye [preauth]
Nov 29 05:32:08 compute-0 sshd-session[258992]: Disconnected from authenticating user root 152.32.145.111 port 60600 [preauth]
Nov 29 05:32:09 compute-0 ceph-mon[75176]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:11 compute-0 ceph-mon[75176]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:32:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:32:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:32:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:32:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:32:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:32:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:12 compute-0 podman[258994]: 2025-11-29 05:32:12.047907545 +0000 UTC m=+0.091988387 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 05:32:13 compute-0 ceph-mon[75176]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:32:13.745 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:32:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:32:13.747 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:32:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:32:13.748 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:32:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:32:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/159069613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:32:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:32:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/159069613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:32:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:15 compute-0 ceph-mon[75176]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/159069613' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:32:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/159069613' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
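The audit entries above are the OpenStack storage-stats poll: client.openstack at 192.168.122.10 dispatches `df` and `osd pool get-quota` as mon commands. The same calls can be reproduced with the librados Python binding; a minimal sketch, assuming /etc/ceph/ceph.conf and a readable client.openstack keyring:

    # Sketch: issue the same mon commands the audit log shows, via librados.
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
            ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, json.loads(outbuf) if outbuf else outs)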
Nov 29 05:32:15 compute-0 sudo[259017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:32:15 compute-0 sudo[259017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:15 compute-0 sudo[259017]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:15 compute-0 sudo[259042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:32:15 compute-0 sudo[259042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:15 compute-0 sudo[259042]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:15 compute-0 sudo[259067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:32:15 compute-0 sudo[259067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:15 compute-0 sudo[259067]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:15 compute-0 sudo[259092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:32:15 compute-0 sudo[259092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:16 compute-0 sudo[259092]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:32:16 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:32:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:32:16 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:32:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:32:16 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:32:16 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 442a2dd5-4db3-4139-ad69-183fbdc38442 does not exist
Nov 29 05:32:16 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 7a036b06-1f48-44f8-819c-a74a40c5b33b does not exist
Nov 29 05:32:16 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev f21b6119-759e-4ea2-9cef-385c02e5b859 does not exist
Nov 29 05:32:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:32:16 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:32:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:32:16 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:32:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:32:16 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:32:16 compute-0 sudo[259149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:32:16 compute-0 sudo[259149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:16 compute-0 sudo[259149]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:16 compute-0 sudo[259175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:32:16 compute-0 sudo[259175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:16 compute-0 sudo[259175]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:16 compute-0 podman[259173]: 2025-11-29 05:32:16.34935213 +0000 UTC m=+0.116885879 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 05:32:16 compute-0 sudo[259216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:32:16 compute-0 sudo[259216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:16 compute-0 sudo[259216]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:16 compute-0 sudo[259250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:32:16 compute-0 sudo[259250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:16 compute-0 podman[259315]: 2025-11-29 05:32:16.743170479 +0000 UTC m=+0.036454753 container create 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:32:16 compute-0 systemd[1]: Started libpod-conmon-24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c.scope.
Nov 29 05:32:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:32:16 compute-0 podman[259315]: 2025-11-29 05:32:16.72706773 +0000 UTC m=+0.020352024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:32:16 compute-0 podman[259315]: 2025-11-29 05:32:16.82505065 +0000 UTC m=+0.118334944 container init 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:32:16 compute-0 podman[259315]: 2025-11-29 05:32:16.831829535 +0000 UTC m=+0.125113799 container start 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 05:32:16 compute-0 podman[259315]: 2025-11-29 05:32:16.834468588 +0000 UTC m=+0.127752942 container attach 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:32:16 compute-0 systemd[1]: libpod-24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c.scope: Deactivated successfully.
Nov 29 05:32:16 compute-0 brave_wilson[259331]: 167 167
Nov 29 05:32:16 compute-0 conmon[259331]: conmon 24c9575a8029f14d68ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c.scope/container/memory.events
Nov 29 05:32:16 compute-0 podman[259315]: 2025-11-29 05:32:16.841947549 +0000 UTC m=+0.135231813 container died 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 05:32:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-49c3c8d8b2d6db2ae0eacb5e749098ecf53d8b9eb6fb85b91eb8b1345f978952-merged.mount: Deactivated successfully.
Nov 29 05:32:16 compute-0 podman[259315]: 2025-11-29 05:32:16.879239731 +0000 UTC m=+0.172524005 container remove 24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:32:16 compute-0 systemd[1]: libpod-conmon-24c9575a8029f14d68ad49c2642fe8e8db49c48d20c521e934b6dc1b7e77d19c.scope: Deactivated successfully.
Nov 29 05:32:17 compute-0 ceph-mon[75176]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:32:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:32:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:32:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:32:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:32:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:32:17 compute-0 podman[259356]: 2025-11-29 05:32:17.071433972 +0000 UTC m=+0.059083752 container create d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:32:17 compute-0 systemd[1]: Started libpod-conmon-d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b.scope.
Nov 29 05:32:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:17 compute-0 podman[259356]: 2025-11-29 05:32:17.051004207 +0000 UTC m=+0.038653957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:32:17 compute-0 podman[259356]: 2025-11-29 05:32:17.162362782 +0000 UTC m=+0.150012572 container init d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 05:32:17 compute-0 podman[259356]: 2025-11-29 05:32:17.175435578 +0000 UTC m=+0.163085318 container start d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:32:17 compute-0 podman[259356]: 2025-11-29 05:32:17.178728157 +0000 UTC m=+0.166377927 container attach d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:32:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:18 compute-0 crazy_bhabha[259373]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:32:18 compute-0 crazy_bhabha[259373]: --> relative data size: 1.0
Nov 29 05:32:18 compute-0 crazy_bhabha[259373]: --> All data devices are unavailable
Nov 29 05:32:18 compute-0 systemd[1]: libpod-d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b.scope: Deactivated successfully.
Nov 29 05:32:18 compute-0 podman[259356]: 2025-11-29 05:32:18.147657821 +0000 UTC m=+1.135307601 container died d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-546e3e59a574715be51d3636c42e5f0488160d55b4feca05786f7cf3f18a7f57-merged.mount: Deactivated successfully.
Nov 29 05:32:18 compute-0 podman[259356]: 2025-11-29 05:32:18.195584081 +0000 UTC m=+1.183233821 container remove d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:32:18 compute-0 systemd[1]: libpod-conmon-d5b993eb970a668a69926ab5bf3e28898b8d64e1f1e409ea0e9bbee360f9089b.scope: Deactivated successfully.
Nov 29 05:32:18 compute-0 sudo[259250]: pam_unix(sudo:session): session closed for user root
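The `lvm batch` attempt above (container crazy_bhabha) ends with "--> All data devices are unavailable": the three LVs passed in are already prepared OSDs, so ceph-volume rejects them as data devices, and cephadm immediately follows up with the `lvm list` call a few lines below to reconcile its view. The lv_tags in that output carry the proof (a ceph.osd_id and ceph.osd_fsid per LV); a small sketch that performs the same check from the JSON, with the fsid taken from this log and `cephadm ceph-volume` assumed to be on PATH:

    # Sketch: decide from `ceph-volume lvm list --format json` output which
    # LVs are already consumed by an OSD (and would make `lvm batch` report
    # "All data devices are unavailable").
    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "93f82912-647c-5e78-b081-707d0a2966d8",
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} (fsid {lv['tags']['ceph.osd_fsid']})")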
Nov 29 05:32:18 compute-0 sudo[259414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:32:18 compute-0 sudo[259414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:18 compute-0 sudo[259414]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:18 compute-0 sudo[259439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:32:18 compute-0 sudo[259439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:18 compute-0 sudo[259439]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:18 compute-0 sudo[259464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:32:18 compute-0 sudo[259464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:18 compute-0 sudo[259464]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:18 compute-0 sudo[259489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:32:18 compute-0 sudo[259489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:18 compute-0 podman[259554]: 2025-11-29 05:32:18.742023382 +0000 UTC m=+0.045156623 container create 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 05:32:18 compute-0 systemd[1]: Started libpod-conmon-5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69.scope.
Nov 29 05:32:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:32:18 compute-0 podman[259554]: 2025-11-29 05:32:18.804144115 +0000 UTC m=+0.107277386 container init 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:32:18 compute-0 podman[259554]: 2025-11-29 05:32:18.81506754 +0000 UTC m=+0.118200781 container start 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:32:18 compute-0 podman[259554]: 2025-11-29 05:32:18.818113293 +0000 UTC m=+0.121246554 container attach 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:32:18 compute-0 podman[259554]: 2025-11-29 05:32:18.723197496 +0000 UTC m=+0.026330797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:32:18 compute-0 pensive_carver[259570]: 167 167
Nov 29 05:32:18 compute-0 systemd[1]: libpod-5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69.scope: Deactivated successfully.
Nov 29 05:32:18 compute-0 podman[259554]: 2025-11-29 05:32:18.822407287 +0000 UTC m=+0.125540528 container died 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:32:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6d660fb10cb3624408b8744267b52a5c86210999eb2d4e0275733c3083931e5-merged.mount: Deactivated successfully.
Nov 29 05:32:18 compute-0 podman[259554]: 2025-11-29 05:32:18.849650857 +0000 UTC m=+0.152784098 container remove 5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_carver, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:32:18 compute-0 systemd[1]: libpod-conmon-5070595c0856495ca54ec9a5d3cd4312f19bbf7da71a79a04e0e69c66c36fb69.scope: Deactivated successfully.
Nov 29 05:32:19 compute-0 podman[259596]: 2025-11-29 05:32:19.018374988 +0000 UTC m=+0.039944316 container create 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:32:19 compute-0 ceph-mon[75176]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:19 compute-0 systemd[1]: Started libpod-conmon-86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef.scope.
Nov 29 05:32:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c868fa690cc233000f362b4c1dbe8d924543beb7fc0f2ba06f68524c8cc7eb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c868fa690cc233000f362b4c1dbe8d924543beb7fc0f2ba06f68524c8cc7eb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c868fa690cc233000f362b4c1dbe8d924543beb7fc0f2ba06f68524c8cc7eb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c868fa690cc233000f362b4c1dbe8d924543beb7fc0f2ba06f68524c8cc7eb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:19 compute-0 podman[259596]: 2025-11-29 05:32:18.99900388 +0000 UTC m=+0.020573178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:32:19 compute-0 podman[259596]: 2025-11-29 05:32:19.11101215 +0000 UTC m=+0.132581448 container init 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:32:19 compute-0 podman[259596]: 2025-11-29 05:32:19.120053499 +0000 UTC m=+0.141622827 container start 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:32:19 compute-0 podman[259596]: 2025-11-29 05:32:19.12508335 +0000 UTC m=+0.146652658 container attach 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:32:19 compute-0 trusting_mendel[259612]: {
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:     "0": [
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:         {
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "devices": [
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "/dev/loop3"
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             ],
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_name": "ceph_lv0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_size": "21470642176",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "name": "ceph_lv0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "tags": {
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.cluster_name": "ceph",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.crush_device_class": "",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.encrypted": "0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.osd_id": "0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.type": "block",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.vdo": "0"
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             },
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "type": "block",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "vg_name": "ceph_vg0"
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:         }
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:     ],
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:     "1": [
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:         {
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "devices": [
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "/dev/loop4"
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             ],
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_name": "ceph_lv1",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_size": "21470642176",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "name": "ceph_lv1",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "tags": {
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.cluster_name": "ceph",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.crush_device_class": "",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.encrypted": "0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.osd_id": "1",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.type": "block",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.vdo": "0"
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             },
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "type": "block",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "vg_name": "ceph_vg1"
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:         }
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:     ],
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:     "2": [
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:         {
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "devices": [
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "/dev/loop5"
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             ],
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_name": "ceph_lv2",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_size": "21470642176",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "name": "ceph_lv2",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "tags": {
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.cluster_name": "ceph",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.crush_device_class": "",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.encrypted": "0",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.osd_id": "2",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.type": "block",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:                 "ceph.vdo": "0"
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             },
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "type": "block",
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:             "vg_name": "ceph_vg2"
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:         }
Nov 29 05:32:19 compute-0 trusting_mendel[259612]:     ]
Nov 29 05:32:19 compute-0 trusting_mendel[259612]: }
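The JSON blob above is the inventory cephadm collects from ceph-volume inside the short-lived trusting_mendel container; it is keyed by OSD id and matches the shape of ceph-volume lvm list --format json output. A minimal sketch of reading the device map back out of it, assuming the blob were saved to a hypothetical file lvm_list.json:

    import json

    # Load the ceph-volume inventory shown above (hypothetical filename).
    with open("lvm_list.json") as f:
        inventory = json.load(f)

    # Each top-level key is an OSD id; each entry describes one logical volume.
    for osd_id, lvs in inventory.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")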
Nov 29 05:32:19 compute-0 systemd[1]: libpod-86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef.scope: Deactivated successfully.
Nov 29 05:32:19 compute-0 podman[259596]: 2025-11-29 05:32:19.820609559 +0000 UTC m=+0.842178847 container died 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 05:32:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c868fa690cc233000f362b4c1dbe8d924543beb7fc0f2ba06f68524c8cc7eb6-merged.mount: Deactivated successfully.
Nov 29 05:32:19 compute-0 podman[259596]: 2025-11-29 05:32:19.879833032 +0000 UTC m=+0.901402350 container remove 86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 05:32:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:19 compute-0 systemd[1]: libpod-conmon-86993611bc98e50706d719bf20588b5aa92eb4ef1c3fa374bd06fb6c016ecaef.scope: Deactivated successfully.
Nov 29 05:32:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:19 compute-0 sudo[259489]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:19 compute-0 sudo[259634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:32:19 compute-0 sudo[259634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:19 compute-0 sudo[259634]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:20 compute-0 sudo[259659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:32:20 compute-0 sudo[259659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:20 compute-0 sudo[259659]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:20 compute-0 sudo[259684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:32:20 compute-0 sudo[259684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:20 compute-0 sudo[259684]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:20 compute-0 sudo[259709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:32:20 compute-0 sudo[259709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:20 compute-0 podman[259775]: 2025-11-29 05:32:20.554855575 +0000 UTC m=+0.044821885 container create ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:32:20 compute-0 systemd[1]: Started libpod-conmon-ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df.scope.
Nov 29 05:32:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:32:20 compute-0 podman[259775]: 2025-11-29 05:32:20.535592329 +0000 UTC m=+0.025558689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:32:20 compute-0 podman[259775]: 2025-11-29 05:32:20.631606451 +0000 UTC m=+0.121572761 container init ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:32:20 compute-0 podman[259775]: 2025-11-29 05:32:20.637054413 +0000 UTC m=+0.127020723 container start ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:32:20 compute-0 podman[259775]: 2025-11-29 05:32:20.639907522 +0000 UTC m=+0.129873832 container attach ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 05:32:20 compute-0 condescending_ganguly[259791]: 167 167
Nov 29 05:32:20 compute-0 systemd[1]: libpod-ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df.scope: Deactivated successfully.
Nov 29 05:32:20 compute-0 conmon[259791]: conmon ca1dd3923f5dd64c4051 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df.scope/container/memory.events
Nov 29 05:32:20 compute-0 podman[259775]: 2025-11-29 05:32:20.643354836 +0000 UTC m=+0.133321176 container died ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:32:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e48382776c319aea0436391fda6ffc3edd22b1a084e46db12967643aab6a28ca-merged.mount: Deactivated successfully.
Nov 29 05:32:20 compute-0 podman[259775]: 2025-11-29 05:32:20.680122725 +0000 UTC m=+0.170089025 container remove ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:32:20 compute-0 systemd[1]: libpod-conmon-ca1dd3923f5dd64c4051525d249f44fbb2fe31ea71a33019d7f667fedc2173df.scope: Deactivated successfully.
Nov 29 05:32:20 compute-0 podman[259816]: 2025-11-29 05:32:20.843892548 +0000 UTC m=+0.047103821 container create 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:32:20 compute-0 systemd[1]: Started libpod-conmon-7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d.scope.
Nov 29 05:32:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791b3aef8ec537b4b53692762922072e813fe4669d0151b950b92baedb303c7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791b3aef8ec537b4b53692762922072e813fe4669d0151b950b92baedb303c7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791b3aef8ec537b4b53692762922072e813fe4669d0151b950b92baedb303c7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791b3aef8ec537b4b53692762922072e813fe4669d0151b950b92baedb303c7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:32:20 compute-0 podman[259816]: 2025-11-29 05:32:20.915754516 +0000 UTC m=+0.118965799 container init 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:32:20 compute-0 sshd-session[259617]: Received disconnect from 45.120.216.232 port 39880:11: Bye Bye [preauth]
Nov 29 05:32:20 compute-0 sshd-session[259617]: Disconnected from authenticating user root 45.120.216.232 port 39880 [preauth]
Nov 29 05:32:20 compute-0 podman[259816]: 2025-11-29 05:32:20.921753312 +0000 UTC m=+0.124964575 container start 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:32:20 compute-0 podman[259816]: 2025-11-29 05:32:20.828425904 +0000 UTC m=+0.031637197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:32:20 compute-0 podman[259816]: 2025-11-29 05:32:20.924955929 +0000 UTC m=+0.128167222 container attach 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:32:21 compute-0 ceph-mon[75176]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:21 compute-0 goofy_knuth[259833]: {
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "osd_id": 0,
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "type": "bluestore"
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:     },
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "osd_id": 1,
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "type": "bluestore"
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:     },
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "osd_id": 2,
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:         "type": "bluestore"
Nov 29 05:32:21 compute-0 goofy_knuth[259833]:     }
Nov 29 05:32:21 compute-0 goofy_knuth[259833]: }
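This second blob comes from the ceph-volume raw list --format json invocation visible in the sudo line above, and it is keyed by OSD UUID rather than OSD id. The two listings can be joined on the ceph.osd_fsid tag; a sketch, again with hypothetical filenames:

    import json

    with open("lvm_list.json") as f:   # first blob, keyed by OSD id
        lvm = json.load(f)
    with open("raw_list.json") as f:   # this blob, keyed by OSD UUID
        raw = json.load(f)

    # The raw-list key equals the ceph.osd_fsid tag on the logical volume.
    by_fsid = {lv["tags"]["ceph.osd_fsid"]: lv
               for lvs in lvm.values() for lv in lvs}
    for osd_uuid, entry in raw.items():
        lv = by_fsid[osd_uuid]
        print(f"osd.{entry['osd_id']} ({entry['type']}): "
              f"{entry['device']} backed by {lv['devices'][0]}")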
Nov 29 05:32:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:21 compute-0 systemd[1]: libpod-7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d.scope: Deactivated successfully.
Nov 29 05:32:21 compute-0 podman[259867]: 2025-11-29 05:32:21.9735005 +0000 UTC m=+0.044413416 container died 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:32:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-791b3aef8ec537b4b53692762922072e813fe4669d0151b950b92baedb303c7f-merged.mount: Deactivated successfully.
Nov 29 05:32:22 compute-0 podman[259867]: 2025-11-29 05:32:22.025218681 +0000 UTC m=+0.096131497 container remove 7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_knuth, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:32:22 compute-0 systemd[1]: libpod-conmon-7107b3e56832f622647312bee597cf202a718003bab1ec9269abbac8f799a03d.scope: Deactivated successfully.
Nov 29 05:32:22 compute-0 sudo[259709]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:32:22 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:32:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:32:22 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:32:22 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 41f4b115-5b63-4fe2-b6d4-2f47647f87f0 does not exist
Nov 29 05:32:22 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 61c127a9-4e64-44ba-8f1d-21774a3242da does not exist
Nov 29 05:32:22 compute-0 sudo[259882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:32:22 compute-0 sudo[259882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:22 compute-0 sudo[259882]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:22 compute-0 sudo[259907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:32:22 compute-0 sudo[259907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:32:22 compute-0 sudo[259907]: pam_unix(sudo:session): session closed for user root
Nov 29 05:32:23 compute-0 ceph-mon[75176]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:32:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:32:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:25 compute-0 ceph-mon[75176]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:25 compute-0 nova_compute[254898]: 2025-11-29 05:32:25.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:32:25 compute-0 nova_compute[254898]: 2025-11-29 05:32:25.971 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:32:26 compute-0 podman[259932]: 2025-11-29 05:32:26.033140024 +0000 UTC m=+0.076735528 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:32:26 compute-0 nova_compute[254898]: 2025-11-29 05:32:26.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:32:27 compute-0 ceph-mon[75176]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:27 compute-0 nova_compute[254898]: 2025-11-29 05:32:27.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:32:27 compute-0 nova_compute[254898]: 2025-11-29 05:32:27.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:32:27 compute-0 nova_compute[254898]: 2025-11-29 05:32:27.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:32:28 compute-0 nova_compute[254898]: 2025-11-29 05:32:28.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:32:28 compute-0 nova_compute[254898]: 2025-11-29 05:32:28.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:32:28 compute-0 nova_compute[254898]: 2025-11-29 05:32:28.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:32:29 compute-0 ceph-mon[75176]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.224 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.227 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.255 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.255 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.256 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.256 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.256 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:32:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:32:29 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2733328198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.701 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
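The resource tracker audit shells out to exactly the command logged in this round-trip. A self-contained sketch of the same call, assuming client.openstack credentials and /etc/ceph/ceph.conf exist on the host, and that the usual top-level stats keys of ceph df JSON output are present:

    import json
    import subprocess

    # Same invocation nova_compute logs above; --id selects the cephx user.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout

    stats = json.loads(out)
    # The 60 GiB avail in the pgmap lines should appear as total_avail_bytes.
    print(stats["stats"]["total_avail_bytes"])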
Nov 29 05:32:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.917 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.918 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5122MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.918 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:32:29 compute-0 nova_compute[254898]: 2025-11-29 05:32:29.919 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:32:30 compute-0 nova_compute[254898]: 2025-11-29 05:32:30.015 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:32:30 compute-0 nova_compute[254898]: 2025-11-29 05:32:30.016 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:32:30 compute-0 nova_compute[254898]: 2025-11-29 05:32:30.046 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:32:30 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2733328198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:32:30 compute-0 ceph-mon[75176]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:32:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:32:30 compute-0 nova_compute[254898]: 2025-11-29 05:32:30.475 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:32:30 compute-0 nova_compute[254898]: 2025-11-29 05:32:30.483 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:32:30 compute-0 nova_compute[254898]: 2025-11-29 05:32:30.508 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:32:30 compute-0 nova_compute[254898]: 2025-11-29 05:32:30.512 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:32:30 compute-0 nova_compute[254898]: 2025-11-29 05:32:30.513 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
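Taken together, these nova_compute lines trace one update_available_resource pass: the compute_resources lock is acquired, the hypervisor and placement views are reconciled, and the lock is released 0.594s later. A minimal sketch of that serialize-the-audit pattern with oslo_concurrency's lockutils (the function body is a hypothetical stand-in, not Nova's code):

    from oslo_concurrency import lockutils

    # Serialize audits on a named lock, as the Acquiring/released lines trace.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Hypothetical stand-in for the hypervisor scan and placement sync
        # Nova performs while holding the lock.
        print("auditing compute resources")

    update_available_resource()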
Nov 29 05:32:31 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3282282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:32:31 compute-0 nova_compute[254898]: 2025-11-29 05:32:31.240 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:32:31 compute-0 nova_compute[254898]: 2025-11-29 05:32:31.241 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:32:31 compute-0 nova_compute[254898]: 2025-11-29 05:32:31.241 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:32:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:32 compute-0 ceph-mon[75176]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:34 compute-0 ceph-mon[75176]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:36 compute-0 ceph-mon[75176]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:38 compute-0 ceph-mon[75176]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:40 compute-0 ceph-mon[75176]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:32:41
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups']
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:32:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:42 compute-0 ceph-mon[75176]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:43 compute-0 podman[259995]: 2025-11-29 05:32:43.041031479 +0000 UTC m=+0.084298351 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:32:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:44 compute-0 ceph-mon[75176]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:47 compute-0 ceph-mon[75176]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:47 compute-0 podman[260015]: 2025-11-29 05:32:47.060714036 +0000 UTC m=+0.104041138 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 05:32:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:49 compute-0 ceph-mon[75176]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:51 compute-0 ceph-mon[75176]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:32:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:53 compute-0 ceph-mon[75176]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:55 compute-0 ceph-mon[75176]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:57 compute-0 ceph-mon[75176]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:57 compute-0 podman[260041]: 2025-11-29 05:32:57.062371441 +0000 UTC m=+0.098727550 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:32:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:59 compute-0 ceph-mon[75176]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:32:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:32:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:01 compute-0 ceph-mon[75176]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:03 compute-0 ceph-mon[75176]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:05 compute-0 ceph-mon[75176]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:07 compute-0 ceph-mon[75176]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:09 compute-0 ceph-mon[75176]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:11 compute-0 ceph-mon[75176]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:33:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:33:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:33:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:33:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:33:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:33:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:12 compute-0 ceph-mon[75176]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:33:13.746 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:33:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:33:13.747 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:33:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:33:13.747 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:33:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:14 compute-0 podman[260060]: 2025-11-29 05:33:14.021540638 +0000 UTC m=+0.078368342 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:33:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:33:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/207339170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:33:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:33:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/207339170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:33:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:14 compute-0 ceph-mon[75176]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/207339170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:33:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/207339170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:33:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:16 compute-0 ceph-mon[75176]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:18 compute-0 podman[260080]: 2025-11-29 05:33:18.063456004 +0000 UTC m=+0.116919316 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 05:33:18 compute-0 ceph-mon[75176]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:20 compute-0 ceph-mon[75176]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:22 compute-0 sudo[260107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:33:22 compute-0 sudo[260107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:22 compute-0 sudo[260107]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:22 compute-0 sudo[260132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:33:22 compute-0 sudo[260132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:22 compute-0 sudo[260132]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:22 compute-0 sudo[260157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:33:22 compute-0 sudo[260157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:22 compute-0 sudo[260157]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:22 compute-0 sudo[260182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:33:22 compute-0 sudo[260182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:22 compute-0 sudo[260182]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:23 compute-0 ceph-mon[75176]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:33:23 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:33:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:33:23 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:33:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:33:23 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:33:23 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 11b03f41-e8fc-409e-8cb3-7abe1bca0259 does not exist
Nov 29 05:33:23 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev bc3ea8e7-a975-4b72-ba53-81b6b665210c does not exist
Nov 29 05:33:23 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d0836cc1-173a-41c5-9ff3-cc9fd17973d3 does not exist
Nov 29 05:33:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:33:23 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:33:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:33:23 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:33:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:33:23 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:33:23 compute-0 sudo[260237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:33:23 compute-0 sudo[260237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:23 compute-0 sudo[260237]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:23 compute-0 sudo[260262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:33:23 compute-0 sudo[260262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:23 compute-0 sudo[260262]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:23 compute-0 sudo[260287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:33:23 compute-0 sudo[260287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:23 compute-0 sudo[260287]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:23 compute-0 sudo[260312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:33:23 compute-0 sudo[260312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:23 compute-0 podman[260377]: 2025-11-29 05:33:23.727572802 +0000 UTC m=+0.052819038 container create 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Nov 29 05:33:23 compute-0 systemd[1]: Started libpod-conmon-5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6.scope.
Nov 29 05:33:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:33:23 compute-0 podman[260377]: 2025-11-29 05:33:23.704580741 +0000 UTC m=+0.029826967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:33:23 compute-0 podman[260377]: 2025-11-29 05:33:23.819453938 +0000 UTC m=+0.144700234 container init 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:33:23 compute-0 podman[260377]: 2025-11-29 05:33:23.83125201 +0000 UTC m=+0.156498246 container start 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:33:23 compute-0 podman[260377]: 2025-11-29 05:33:23.835095453 +0000 UTC m=+0.160341709 container attach 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:33:23 compute-0 suspicious_neumann[260394]: 167 167
Nov 29 05:33:23 compute-0 systemd[1]: libpod-5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6.scope: Deactivated successfully.
Nov 29 05:33:23 compute-0 podman[260377]: 2025-11-29 05:33:23.840709948 +0000 UTC m=+0.165956224 container died 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:33:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d08265eb8fb6568ffea87887105f6733c0570a33388225b11aa0344fb8e1ac4a-merged.mount: Deactivated successfully.
Nov 29 05:33:23 compute-0 podman[260377]: 2025-11-29 05:33:23.895621135 +0000 UTC m=+0.220867371 container remove 5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:33:23 compute-0 systemd[1]: libpod-conmon-5934fdbddc21fb1a855d187fd869c9b92a5066e795f62f4e9cd964127fcf60c6.scope: Deactivated successfully.
Nov 29 05:33:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:33:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:33:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:33:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:33:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:33:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:33:24 compute-0 podman[260418]: 2025-11-29 05:33:24.130920572 +0000 UTC m=+0.070751128 container create e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 05:33:24 compute-0 systemd[1]: Started libpod-conmon-e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5.scope.
Nov 29 05:33:24 compute-0 podman[260418]: 2025-11-29 05:33:24.10290297 +0000 UTC m=+0.042733586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:33:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:24 compute-0 podman[260418]: 2025-11-29 05:33:24.224445417 +0000 UTC m=+0.164276033 container init e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:33:24 compute-0 podman[260418]: 2025-11-29 05:33:24.239915468 +0000 UTC m=+0.179746034 container start e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:33:24 compute-0 podman[260418]: 2025-11-29 05:33:24.24373001 +0000 UTC m=+0.183560616 container attach e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:33:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:24 compute-0 nova_compute[254898]: 2025-11-29 05:33:24.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:24 compute-0 nova_compute[254898]: 2025-11-29 05:33:24.959 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 05:33:24 compute-0 nova_compute[254898]: 2025-11-29 05:33:24.977 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 05:33:24 compute-0 nova_compute[254898]: 2025-11-29 05:33:24.978 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:24 compute-0 nova_compute[254898]: 2025-11-29 05:33:24.978 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 05:33:24 compute-0 nova_compute[254898]: 2025-11-29 05:33:24.991 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:25 compute-0 ceph-mon[75176]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:25 compute-0 boring_ptolemy[260434]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:33:25 compute-0 boring_ptolemy[260434]: --> relative data size: 1.0
Nov 29 05:33:25 compute-0 boring_ptolemy[260434]: --> All data devices are unavailable
Nov 29 05:33:25 compute-0 systemd[1]: libpod-e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5.scope: Deactivated successfully.
Nov 29 05:33:25 compute-0 systemd[1]: libpod-e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5.scope: Consumed 1.177s CPU time.
Nov 29 05:33:25 compute-0 podman[260418]: 2025-11-29 05:33:25.477610833 +0000 UTC m=+1.417441399 container died e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:33:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d98678e2324ede08f63f1e347859e5fb24e86657c67e44a25d121486cd0d495-merged.mount: Deactivated successfully.
Nov 29 05:33:25 compute-0 podman[260418]: 2025-11-29 05:33:25.5437287 +0000 UTC m=+1.483559226 container remove e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:33:25 compute-0 systemd[1]: libpod-conmon-e8957a28093afe91172e99ba22fb5d42250b6fc4358cc7d2651d2277a0646fa5.scope: Deactivated successfully.
Nov 29 05:33:25 compute-0 sudo[260312]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:25 compute-0 sudo[260474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:33:25 compute-0 sudo[260474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:25 compute-0 sudo[260474]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:25 compute-0 sudo[260499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:33:25 compute-0 sudo[260499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:25 compute-0 sudo[260499]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:25 compute-0 sudo[260524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:33:25 compute-0 sudo[260524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:25 compute-0 sudo[260524]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:25 compute-0 sudo[260549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:33:25 compute-0 sudo[260549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:26 compute-0 nova_compute[254898]: 2025-11-29 05:33:26.002 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:26 compute-0 podman[260617]: 2025-11-29 05:33:26.303923134 +0000 UTC m=+0.064146940 container create a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:33:26 compute-0 systemd[1]: Started libpod-conmon-a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11.scope.
Nov 29 05:33:26 compute-0 podman[260617]: 2025-11-29 05:33:26.278986336 +0000 UTC m=+0.039210222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:33:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:33:26 compute-0 podman[260617]: 2025-11-29 05:33:26.413491584 +0000 UTC m=+0.173715390 container init a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:33:26 compute-0 podman[260617]: 2025-11-29 05:33:26.42708734 +0000 UTC m=+0.187311186 container start a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:33:26 compute-0 podman[260617]: 2025-11-29 05:33:26.431552998 +0000 UTC m=+0.191776794 container attach a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 05:33:26 compute-0 hungry_banzai[260633]: 167 167
Nov 29 05:33:26 compute-0 systemd[1]: libpod-a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11.scope: Deactivated successfully.
Nov 29 05:33:26 compute-0 podman[260617]: 2025-11-29 05:33:26.434942859 +0000 UTC m=+0.195166705 container died a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:33:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d14a72fa03860050ad7401c0eb532e12b22ca4c086bd4a04b63e87bc56635bd-merged.mount: Deactivated successfully.
Nov 29 05:33:26 compute-0 podman[260617]: 2025-11-29 05:33:26.479028027 +0000 UTC m=+0.239251833 container remove a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banzai, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:33:26 compute-0 systemd[1]: libpod-conmon-a507f1073dcdf939f124c27557c4e3ddef2af62d6647f53ca9d4d5ffd8567c11.scope: Deactivated successfully.
Nov 29 05:33:26 compute-0 podman[260656]: 2025-11-29 05:33:26.726491985 +0000 UTC m=+0.076863945 container create 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:33:26 compute-0 systemd[1]: Started libpod-conmon-09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d.scope.
Nov 29 05:33:26 compute-0 podman[260656]: 2025-11-29 05:33:26.695982943 +0000 UTC m=+0.046354963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:33:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e627a1d72690ed4695d0d84b8f784a435603657e08ddd8698ddcabf2441fc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e627a1d72690ed4695d0d84b8f784a435603657e08ddd8698ddcabf2441fc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e627a1d72690ed4695d0d84b8f784a435603657e08ddd8698ddcabf2441fc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e627a1d72690ed4695d0d84b8f784a435603657e08ddd8698ddcabf2441fc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:26 compute-0 podman[260656]: 2025-11-29 05:33:26.837863789 +0000 UTC m=+0.188235799 container init 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:33:26 compute-0 podman[260656]: 2025-11-29 05:33:26.851007314 +0000 UTC m=+0.201379274 container start 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:33:26 compute-0 podman[260656]: 2025-11-29 05:33:26.855780849 +0000 UTC m=+0.206152869 container attach 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:33:27 compute-0 ceph-mon[75176]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:27 compute-0 quirky_davinci[260675]: {
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:     "0": [
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:         {
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "devices": [
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "/dev/loop3"
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             ],
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_name": "ceph_lv0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_size": "21470642176",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "name": "ceph_lv0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "tags": {
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.cluster_name": "ceph",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.crush_device_class": "",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.encrypted": "0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.osd_id": "0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.type": "block",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.vdo": "0"
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             },
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "type": "block",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "vg_name": "ceph_vg0"
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:         }
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:     ],
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:     "1": [
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:         {
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "devices": [
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "/dev/loop4"
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             ],
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_name": "ceph_lv1",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_size": "21470642176",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "name": "ceph_lv1",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "tags": {
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.cluster_name": "ceph",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.crush_device_class": "",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.encrypted": "0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.osd_id": "1",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.type": "block",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.vdo": "0"
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             },
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "type": "block",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "vg_name": "ceph_vg1"
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:         }
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:     ],
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:     "2": [
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:         {
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "devices": [
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "/dev/loop5"
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             ],
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_name": "ceph_lv2",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_size": "21470642176",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "name": "ceph_lv2",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "tags": {
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.cluster_name": "ceph",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.crush_device_class": "",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.encrypted": "0",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.osd_id": "2",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.type": "block",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:                 "ceph.vdo": "0"
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             },
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "type": "block",
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:             "vg_name": "ceph_vg2"
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:         }
Nov 29 05:33:27 compute-0 quirky_davinci[260675]:     ]
Nov 29 05:33:27 compute-0 quirky_davinci[260675]: }
Nov 29 05:33:27 compute-0 systemd[1]: libpod-09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d.scope: Deactivated successfully.
Nov 29 05:33:27 compute-0 podman[260684]: 2025-11-29 05:33:27.735637195 +0000 UTC m=+0.023397493 container died 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:33:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-94e627a1d72690ed4695d0d84b8f784a435603657e08ddd8698ddcabf2441fc2-merged.mount: Deactivated successfully.
Nov 29 05:33:27 compute-0 podman[260684]: 2025-11-29 05:33:27.783811461 +0000 UTC m=+0.071571749 container remove 09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:33:27 compute-0 systemd[1]: libpod-conmon-09ff0573cb20bd1401cfee0100d1091d4e09bafecfebd175dfdee0da182f6d6d.scope: Deactivated successfully.
Nov 29 05:33:27 compute-0 sudo[260549]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:27 compute-0 podman[260685]: 2025-11-29 05:33:27.829201221 +0000 UTC m=+0.084706255 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 05:33:27 compute-0 sudo[260718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:33:27 compute-0 sudo[260718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:27 compute-0 sudo[260718]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:27 compute-0 nova_compute[254898]: 2025-11-29 05:33:27.951 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:27 compute-0 sudo[260743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:33:27 compute-0 sudo[260743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:27 compute-0 sudo[260743]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:28 compute-0 sudo[260768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:33:28 compute-0 sudo[260768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:28 compute-0 sudo[260768]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:28 compute-0 sudo[260793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:33:28 compute-0 sudo[260793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:28 compute-0 podman[260858]: 2025-11-29 05:33:28.521043065 +0000 UTC m=+0.052537913 container create f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 05:33:28 compute-0 systemd[1]: Started libpod-conmon-f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776.scope.
Nov 29 05:33:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:33:28 compute-0 podman[260858]: 2025-11-29 05:33:28.585220375 +0000 UTC m=+0.116715223 container init f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:33:28 compute-0 podman[260858]: 2025-11-29 05:33:28.493414151 +0000 UTC m=+0.024909019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:33:28 compute-0 podman[260858]: 2025-11-29 05:33:28.591108076 +0000 UTC m=+0.122602894 container start f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:33:28 compute-0 podman[260858]: 2025-11-29 05:33:28.594061087 +0000 UTC m=+0.125555915 container attach f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:33:28 compute-0 brave_meninsky[260874]: 167 167
Nov 29 05:33:28 compute-0 systemd[1]: libpod-f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776.scope: Deactivated successfully.
Nov 29 05:33:28 compute-0 podman[260858]: 2025-11-29 05:33:28.595335878 +0000 UTC m=+0.126830716 container died f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:33:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd3c6895d533787623bbd2b2d1c187b1c478110fbe02fbc2b55e6a4468fd5c34-merged.mount: Deactivated successfully.
Nov 29 05:33:28 compute-0 podman[260858]: 2025-11-29 05:33:28.630547452 +0000 UTC m=+0.162042270 container remove f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:33:28 compute-0 systemd[1]: libpod-conmon-f6e6ae280690a3094eb9ce8c91b415cdc16df9d1c1cfd18794a56cb1de920776.scope: Deactivated successfully.
Nov 29 05:33:28 compute-0 podman[260896]: 2025-11-29 05:33:28.773121474 +0000 UTC m=+0.036226660 container create 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:33:28 compute-0 systemd[1]: Started libpod-conmon-2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae.scope.
Nov 29 05:33:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ac030b0dda314050003c06d493c7a662629b4a4ff6e64cb1c4d6f702e15858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:28 compute-0 podman[260896]: 2025-11-29 05:33:28.756402673 +0000 UTC m=+0.019507899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ac030b0dda314050003c06d493c7a662629b4a4ff6e64cb1c4d6f702e15858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ac030b0dda314050003c06d493c7a662629b4a4ff6e64cb1c4d6f702e15858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65ac030b0dda314050003c06d493c7a662629b4a4ff6e64cb1c4d6f702e15858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:33:28 compute-0 podman[260896]: 2025-11-29 05:33:28.865668265 +0000 UTC m=+0.128773501 container init 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:33:28 compute-0 podman[260896]: 2025-11-29 05:33:28.873488554 +0000 UTC m=+0.136593750 container start 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:33:28 compute-0 podman[260896]: 2025-11-29 05:33:28.876633149 +0000 UTC m=+0.139738355 container attach 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 05:33:28 compute-0 nova_compute[254898]: 2025-11-29 05:33:28.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:28 compute-0 nova_compute[254898]: 2025-11-29 05:33:28.955 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:28 compute-0 nova_compute[254898]: 2025-11-29 05:33:28.956 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:29 compute-0 ceph-mon[75176]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]: {
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "osd_id": 0,
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "type": "bluestore"
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:     },
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "osd_id": 1,
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "type": "bluestore"
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:     },
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "osd_id": 2,
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:         "type": "bluestore"
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]:     }
Nov 29 05:33:29 compute-0 thirsty_varahamihira[260912]: }
Nov 29 05:33:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:29 compute-0 systemd[1]: libpod-2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae.scope: Deactivated successfully.
Nov 29 05:33:29 compute-0 podman[260896]: 2025-11-29 05:33:29.909909277 +0000 UTC m=+1.173014463 container died 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:33:29 compute-0 systemd[1]: libpod-2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae.scope: Consumed 1.042s CPU time.
Nov 29 05:33:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-65ac030b0dda314050003c06d493c7a662629b4a4ff6e64cb1c4d6f702e15858-merged.mount: Deactivated successfully.
Nov 29 05:33:29 compute-0 nova_compute[254898]: 2025-11-29 05:33:29.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:29 compute-0 nova_compute[254898]: 2025-11-29 05:33:29.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:33:29 compute-0 nova_compute[254898]: 2025-11-29 05:33:29.955 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:33:29 compute-0 podman[260896]: 2025-11-29 05:33:29.97003261 +0000 UTC m=+1.233137796 container remove 2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:33:29 compute-0 systemd[1]: libpod-conmon-2e7589a2d002456eb5705309a1f1ee3085c55ec47eefdaf750e562e3913821ae.scope: Deactivated successfully.
Nov 29 05:33:29 compute-0 nova_compute[254898]: 2025-11-29 05:33:29.984 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:33:29 compute-0 nova_compute[254898]: 2025-11-29 05:33:29.985 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:29 compute-0 nova_compute[254898]: 2025-11-29 05:33:29.985 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:30 compute-0 sudo[260793]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:33:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:33:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.016 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.017 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.017 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.017 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.017 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:33:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:33:30 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ac24e11f-9ee7-4ed3-b713-fc9b40f11d8b does not exist
Nov 29 05:33:30 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev b518b32f-4a19-4b8d-8d9a-246606b06a74 does not exist
Nov 29 05:33:30 compute-0 sudo[260959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:33:30 compute-0 sudo[260959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:30 compute-0 sudo[260959]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:30 compute-0 sudo[260984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:33:30 compute-0 sudo[260984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:33:30 compute-0 sudo[260984]: pam_unix(sudo:session): session closed for user root
Nov 29 05:33:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:33:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654780220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.432 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.675 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.678 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5150MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.678 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.679 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.941 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:33:30 compute-0 nova_compute[254898]: 2025-11-29 05:33:30.941 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:33:31 compute-0 ceph-mon[75176]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:33:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:33:31 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3654780220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.103 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing inventories for resource provider 59594bc8-0143-475b-913f-cbe106b48966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.238 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating ProviderTree inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.239 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.274 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing aggregate associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.311 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing trait associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.336 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:33:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:33:31 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032297455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.852 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.859 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.883 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.885 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:33:31 compute-0 nova_compute[254898]: 2025-11-29 05:33:31.886 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:33:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:32 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1032297455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:33:32 compute-0 nova_compute[254898]: 2025-11-29 05:33:32.855 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:33:32 compute-0 nova_compute[254898]: 2025-11-29 05:33:32.855 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:33:33 compute-0 ceph-mon[75176]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:35 compute-0 ceph-mon[75176]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:35 compute-0 sshd-session[261052]: Received disconnect from 45.120.216.232 port 38778:11: Bye Bye [preauth]
Nov 29 05:33:35 compute-0 sshd-session[261052]: Disconnected from authenticating user root 45.120.216.232 port 38778 [preauth]
Nov 29 05:33:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:37 compute-0 ceph-mon[75176]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:39 compute-0 ceph-mon[75176]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:41 compute-0 ceph-mon[75176]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:33:41
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.control']
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:33:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:43 compute-0 ceph-mon[75176]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:45 compute-0 podman[261058]: 2025-11-29 05:33:45.031185155 +0000 UTC m=+0.080264458 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 05:33:45 compute-0 ceph-mon[75176]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:45 compute-0 sshd-session[261056]: Received disconnect from 152.32.145.111 port 33224:11: Bye Bye [preauth]
Nov 29 05:33:45 compute-0 sshd-session[261056]: Disconnected from authenticating user root 152.32.145.111 port 33224 [preauth]
Nov 29 05:33:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:47 compute-0 ceph-mon[75176]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:49 compute-0 ceph-mon[75176]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:49 compute-0 podman[261079]: 2025-11-29 05:33:49.099024971 +0000 UTC m=+0.139848897 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 05:33:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:51 compute-0 ceph-mon[75176]: pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
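[editor's note] The autoscaler pass above is reproducible by hand: the trailing 64411926528 in each effective_target_ratio line is the cluster's raw capacity (~60 GiB), and every pool's raw pg target is its fraction of used space times its bias times an overall PG budget. The budget implied by every line in this pass is 300 PGs, consistent with the default mon_target_pg_per_osd of 100 across the 3 OSDs in the osdmap. A minimal sketch in Python, assuming that relationship (the helper is illustrative, not Ceph API):

    # Reproduce the "pg target" figures from the pg_autoscaler lines above.
    # Assumption: budget = mon_target_pg_per_osd (default 100) * 3 OSDs = 300,
    # which matches every pool line in this pass.
    TARGET_PG_PER_OSD = 100
    NUM_OSDS = 3
    BUDGET = TARGET_PG_PER_OSD * NUM_OSDS  # 300

    def pg_target(usage_fraction, bias):
        return usage_fraction * bias * BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557 -> '.mgr', quantized to 1
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.0006105 -> 'cephfs.cephfs.meta'

Targets this small are then quantized upward, which is why each pool is left at its current pg_num.
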
Nov 29 05:33:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:52 compute-0 sshd-session[261054]: Connection closed by 101.47.141.125 port 56008 [preauth]
Nov 29 05:33:53 compute-0 ceph-mon[75176]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:54 compute-0 ceph-mon[75176]: pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:56 compute-0 ceph-mon[75176]: pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:58 compute-0 podman[261107]: 2025-11-29 05:33:58.002170376 +0000 UTC m=+0.057551383 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 05:33:58 compute-0 ceph-mon[75176]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:33:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:33:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:34:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 29 05:34:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 05:34:01 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.016178) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441016249, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2053, "num_deletes": 251, "total_data_size": 3471571, "memory_usage": 3515200, "flush_reason": "Manual Compaction"}
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 29 05:34:01 compute-0 ceph-mon[75176]: pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441048172, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3406772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16324, "largest_seqno": 18376, "table_properties": {"data_size": 3397419, "index_size": 5911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18498, "raw_average_key_size": 19, "raw_value_size": 3378759, "raw_average_value_size": 3625, "num_data_blocks": 268, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394208, "oldest_key_time": 1764394208, "file_creation_time": 1764394441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 32130 microseconds, and 14951 cpu microseconds.
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.048307) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3406772 bytes OK
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.048347) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.050403) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.050434) EVENT_LOG_v1 {"time_micros": 1764394441050423, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.050480) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3462985, prev total WAL file size 3462985, number of live WAL files 2.
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.052101) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3326KB)], [38(7512KB)]
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441052191, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11099450, "oldest_snapshot_seqno": -1}
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4410 keys, 9346152 bytes, temperature: kUnknown
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441148620, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9346152, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9312961, "index_size": 21049, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106563, "raw_average_key_size": 24, "raw_value_size": 9229618, "raw_average_value_size": 2092, "num_data_blocks": 894, "num_entries": 4410, "num_filter_entries": 4410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.148830) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9346152 bytes
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.150532) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.0 rd, 96.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.0) write-amplify(2.7) OK, records in: 4928, records dropped: 518 output_compression: NoCompression
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.150556) EVENT_LOG_v1 {"time_micros": 1764394441150541, "job": 18, "event": "compaction_finished", "compaction_time_micros": 96488, "compaction_time_cpu_micros": 41477, "output_level": 6, "num_output_files": 1, "total_output_size": 9346152, "num_input_records": 4928, "num_output_records": 4410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441151317, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394441152688, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.051962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.152713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.152717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.152718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.152719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:01 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:01.152723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
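[editor's note] The JOB 18 compaction summary above carries its own arithmetic check: it read one 3,406,772-byte L0 file plus the overlapping L6 file (input_data_size 11,099,450 bytes in the compaction_started event) and wrote 9,346,152 bytes back to L6, so write-amplify = 9346152 / 3406772 ≈ 2.7 and read-write-amplify = (11099450 + 9346152) / 3406772 ≈ 6.0, matching the logged figures. A quick sketch of that check, using the byte counts from the EVENT_LOG_v1 lines:

    # Verify the amplification figures RocksDB prints for JOB 18 above.
    l0_bytes = 3_406_772        # new data flushed as table #40
    input_bytes = 11_099_450    # L0 input plus overlapping L6 table #38
    output_bytes = 9_346_152    # table #41 written back to L6

    write_amp = output_bytes / l0_bytes               # ~2.74, logged as 2.7
    rw_amp = (input_bytes + output_bytes) / l0_bytes  # ~6.00, logged as 6.0
    print(round(write_amp, 1), round(rw_amp, 1))
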
Nov 29 05:34:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:34:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 29 05:34:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 05:34:02 compute-0 ceph-mon[75176]: osdmap e121: 3 total, 3 up, 3 in
Nov 29 05:34:02 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 29 05:34:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 29 05:34:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 05:34:03 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 29 05:34:03 compute-0 ceph-mon[75176]: pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:34:03 compute-0 ceph-mon[75176]: osdmap e122: 3 total, 3 up, 3 in
Nov 29 05:34:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:34:04 compute-0 ceph-mon[75176]: osdmap e123: 3 total, 3 up, 3 in
Nov 29 05:34:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 29 05:34:05 compute-0 ceph-mon[75176]: pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:34:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 05:34:05 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 29 05:34:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 25 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 5.1 MiB/s wr, 63 op/s
Nov 29 05:34:06 compute-0 ceph-mon[75176]: osdmap e124: 3 total, 3 up, 3 in
Nov 29 05:34:07 compute-0 ceph-mon[75176]: pgmap v877: 305 pgs: 305 active+clean; 25 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 5.1 MiB/s wr, 63 op/s
Nov 29 05:34:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 25 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 4.2 MiB/s wr, 52 op/s
Nov 29 05:34:09 compute-0 ceph-mon[75176]: pgmap v878: 305 pgs: 305 active+clean; 25 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 4.2 MiB/s wr, 52 op/s
Nov 29 05:34:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 29 05:34:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 05:34:09 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 29 05:34:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 5.9 MiB/s wr, 54 op/s
Nov 29 05:34:10 compute-0 ceph-mon[75176]: osdmap e125: 3 total, 3 up, 3 in
Nov 29 05:34:10 compute-0 ceph-mon[75176]: pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 185 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 5.9 MiB/s wr, 54 op/s
Nov 29 05:34:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:34:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:34:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:34:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:34:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:34:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:34:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 29 05:34:12 compute-0 ceph-mon[75176]: pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Nov 29 05:34:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:34:13.748 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:34:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:34:13.749 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:34:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:34:13.749 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:34:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.8 MiB/s wr, 7 op/s
Nov 29 05:34:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:34:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/987643512' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:34:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:34:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/987643512' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:34:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.919493) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454919542, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 408, "num_deletes": 250, "total_data_size": 271339, "memory_usage": 279968, "flush_reason": "Manual Compaction"}
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454925052, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 255149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18377, "largest_seqno": 18784, "table_properties": {"data_size": 252686, "index_size": 563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6229, "raw_average_key_size": 19, "raw_value_size": 247774, "raw_average_value_size": 781, "num_data_blocks": 25, "num_entries": 317, "num_filter_entries": 317, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394442, "oldest_key_time": 1764394442, "file_creation_time": 1764394454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5611 microseconds, and 2713 cpu microseconds.
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.925105) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 255149 bytes OK
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.925128) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.926949) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.926972) EVENT_LOG_v1 {"time_micros": 1764394454926964, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.926993) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 268761, prev total WAL file size 268761, number of live WAL files 2.
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.927523) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(249KB)], [41(9127KB)]
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454927575, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9601301, "oldest_snapshot_seqno": -1}
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4217 keys, 6323138 bytes, temperature: kUnknown
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454980050, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6323138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6295642, "index_size": 15867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10565, "raw_key_size": 103018, "raw_average_key_size": 24, "raw_value_size": 6219961, "raw_average_value_size": 1474, "num_data_blocks": 668, "num_entries": 4217, "num_filter_entries": 4217, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.980397) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6323138 bytes
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.981961) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.6 rd, 120.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 8.9 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(62.4) write-amplify(24.8) OK, records in: 4727, records dropped: 510 output_compression: NoCompression
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.981993) EVENT_LOG_v1 {"time_micros": 1764394454981977, "job": 20, "event": "compaction_finished", "compaction_time_micros": 52583, "compaction_time_cpu_micros": 30098, "output_level": 6, "num_output_files": 1, "total_output_size": 6323138, "num_input_records": 4727, "num_output_records": 4217, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454982250, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394454985502, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.927476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.985621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.985629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.985632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.985635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:14 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:14.985638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:15 compute-0 ceph-mon[75176]: pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.8 KiB/s rd, 1.8 MiB/s wr, 7 op/s
Nov 29 05:34:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/987643512' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:34:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/987643512' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:34:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.6 MiB/s wr, 6 op/s
Nov 29 05:34:16 compute-0 podman[261126]: 2025-11-29 05:34:16.016123878 +0000 UTC m=+0.069923749 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 05:34:17 compute-0 ceph-mon[75176]: pgmap v883: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.6 MiB/s wr, 6 op/s
Nov 29 05:34:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.6 MiB/s wr, 6 op/s
Nov 29 05:34:19 compute-0 ceph-mon[75176]: pgmap v884: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.6 MiB/s wr, 6 op/s
Nov 29 05:34:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 408 B/s rd, 102 B/s wr, 0 op/s
Nov 29 05:34:20 compute-0 podman[261147]: 2025-11-29 05:34:20.037092909 +0000 UTC m=+0.090973624 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 05:34:21 compute-0 ceph-mon[75176]: pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 408 B/s rd, 102 B/s wr, 0 op/s
Nov 29 05:34:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 29 05:34:23 compute-0 ceph-mon[75176]: pgmap v886: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:34:23 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:23.593+0000 7fa4c75e5640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:23 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:23.593+0000 7fa4c75e5640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:23 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:23.593+0000 7fa4c75e5640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:23 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:23.593+0000 7fa4c75e5640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:23 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:23.593+0000 7fa4c75e5640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "format": "json"}]: dispatch
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
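[editor's note] The mgr lines above trace one complete share-provisioning cycle: an `fs subvolume create` (1 GiB, namespace-isolated, mode 0755, written under /volumes/_nogroup/) followed by `fs subvolume getpath` to resolve the export path. Replayed through the admin CLI from Python, with the subvolume name taken from the audit entry (a sketch assuming local `ceph` CLI access, not the OpenStack driver's actual code path):

    import subprocess

    SUB = "e887b8f7-1920-4aa9-a22b-586da6843031"  # name from the audit entry above

    # Create the 1 GiB namespace-isolated subvolume, then resolve its path.
    subprocess.run(["ceph", "fs", "subvolume", "create", "cephfs", SUB,
                    "--size", "1073741824", "--namespace-isolated",
                    "--mode", "0755"], check=True)
    path = subprocess.run(["ceph", "fs", "subvolume", "getpath", "cephfs", SUB],
                          check=True, capture_output=True, text=True).stdout.strip()
    print(path)  # /volumes/_nogroup/<SUB>/<internal uuid>
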
Nov 29 05:34:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:34:23 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:34:24 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:25 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:25 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "format": "json"}]: dispatch
Nov 29 05:34:25 compute-0 ceph-mon[75176]: pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:34:25 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.csskcz(active, since 25m)
Nov 29 05:34:25 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 05:34:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/86dc64fd-e983-41fb-88c2-0ca9782c4406/.meta.tmp'
Nov 29 05:34:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/86dc64fd-e983-41fb-88c2-0ca9782c4406/.meta.tmp' to config b'/volumes/_nogroup/86dc64fd-e983-41fb-88c2-0ca9782c4406/.meta'
Nov 29 05:34:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 05:34:25 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "format": "json"}]: dispatch
Nov 29 05:34:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 05:34:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 05:34:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:34:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s wr, 0 op/s
Nov 29 05:34:26 compute-0 ceph-mon[75176]: mgrmap e10: compute-0.csskcz(active, since 25m)
Nov 29 05:34:26 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:26 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "format": "json"}]: dispatch
Nov 29 05:34:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:26 compute-0 nova_compute[254898]: 2025-11-29 05:34:26.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:34:27 compute-0 ceph-mon[75176]: pgmap v888: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s wr, 0 op/s
Nov 29 05:34:27 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:34:27.492 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:34:27 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:34:27.493 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 05:34:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 05:34:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/139756d1-c4a7-4d9e-860e-88e58c898640/.meta.tmp'
Nov 29 05:34:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/139756d1-c4a7-4d9e-860e-88e58c898640/.meta.tmp' to config b'/volumes/_nogroup/139756d1-c4a7-4d9e-860e-88e58c898640/.meta'
Nov 29 05:34:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 05:34:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "format": "json"}]: dispatch
Nov 29 05:34:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 05:34:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 05:34:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:34:27 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s wr, 0 op/s
Nov 29 05:34:28 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:28 compute-0 nova_compute[254898]: 2025-11-29 05:34:28.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:34:28 compute-0 nova_compute[254898]: 2025-11-29 05:34:28.983 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:34:28 compute-0 nova_compute[254898]: 2025-11-29 05:34:28.984 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:34:29 compute-0 podman[261187]: 2025-11-29 05:34:29.010818996 +0000 UTC m=+0.060895082 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 05:34:29 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:29 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "format": "json"}]: dispatch
Nov 29 05:34:29 compute-0 ceph-mon[75176]: pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s wr, 0 op/s
Nov 29 05:34:29 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:34:29.495 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 05:34:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Nov 29 05:34:29 compute-0 nova_compute[254898]: 2025-11-29 05:34:29.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:34:29 compute-0 nova_compute[254898]: 2025-11-29 05:34:29.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:34:29 compute-0 nova_compute[254898]: 2025-11-29 05:34:29.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:34:29 compute-0 nova_compute[254898]: 2025-11-29 05:34:29.984 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:34:29 compute-0 nova_compute[254898]: 2025-11-29 05:34:29.984 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:34:29 compute-0 nova_compute[254898]: 2025-11-29 05:34:29.985 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:34:29 compute-0 nova_compute[254898]: 2025-11-29 05:34:29.985 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:34:29 compute-0 nova_compute[254898]: 2025-11-29 05:34:29.985 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:34:30 compute-0 sudo[261226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:34:30 compute-0 sudo[261226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:30 compute-0 sudo[261226]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:30 compute-0 sudo[261251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:34:30 compute-0 sudo[261251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:30 compute-0 sudo[261251]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:30 compute-0 sudo[261276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:34:30 compute-0 sudo[261276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:30 compute-0 sudo[261276]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:30 compute-0 sudo[261301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:34:30 compute-0 sudo[261301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:34:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/666548501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:34:30 compute-0 nova_compute[254898]: 2025-11-29 05:34:30.450 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:34:30 compute-0 nova_compute[254898]: 2025-11-29 05:34:30.609 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:34:30 compute-0 nova_compute[254898]: 2025-11-29 05:34:30.610 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5179MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:34:30 compute-0 nova_compute[254898]: 2025-11-29 05:34:30.610 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:34:30 compute-0 nova_compute[254898]: 2025-11-29 05:34:30.610 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:34:30 compute-0 nova_compute[254898]: 2025-11-29 05:34:30.696 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:34:30 compute-0 nova_compute[254898]: 2025-11-29 05:34:30.697 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:34:30 compute-0 nova_compute[254898]: 2025-11-29 05:34:30.723 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:34:30 compute-0 sudo[261301]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:34:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:34:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:34:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:34:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:34:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:34:30 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 7916a9c3-dca1-42d8-aabf-32a9a58e0024 does not exist
Nov 29 05:34:30 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev f85a386d-2c5f-429a-8bc3-2f942349a6c9 does not exist
Nov 29 05:34:30 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev e9f35225-59a4-4afc-a4fb-03b78f89acbe does not exist
Nov 29 05:34:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:34:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:34:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:34:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:34:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:34:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:34:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 29 05:34:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 05:34:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 05:34:30 compute-0 sudo[261379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:34:30 compute-0 sudo[261379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:30 compute-0 sudo[261379]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:31 compute-0 sudo[261404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:34:31 compute-0 sudo[261404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:31 compute-0 sudo[261404]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:31 compute-0 sudo[261429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:34:31 compute-0 sudo[261429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:31 compute-0 sudo[261429]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:31 compute-0 ceph-mon[75176]: pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Nov 29 05:34:31 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/666548501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:34:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:34:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:34:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:34:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:34:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:34:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:34:31 compute-0 sudo[261454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:34:31 compute-0 sudo[261454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:34:31 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257543193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:34:31 compute-0 nova_compute[254898]: 2025-11-29 05:34:31.222 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:34:31 compute-0 nova_compute[254898]: 2025-11-29 05:34:31.226 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:34:31 compute-0 nova_compute[254898]: 2025-11-29 05:34:31.241 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:34:31 compute-0 nova_compute[254898]: 2025-11-29 05:34:31.242 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:34:31 compute-0 nova_compute[254898]: 2025-11-29 05:34:31.243 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:34:31 compute-0 podman[261522]: 2025-11-29 05:34:31.428576161 +0000 UTC m=+0.053012302 container create 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 05:34:31 compute-0 systemd[1]: Started libpod-conmon-85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48.scope.
Nov 29 05:34:31 compute-0 podman[261522]: 2025-11-29 05:34:31.399039122 +0000 UTC m=+0.023475343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:34:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:34:31 compute-0 podman[261522]: 2025-11-29 05:34:31.520824165 +0000 UTC m=+0.145260346 container init 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:34:31 compute-0 podman[261522]: 2025-11-29 05:34:31.535315194 +0000 UTC m=+0.159751335 container start 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:34:31 compute-0 podman[261522]: 2025-11-29 05:34:31.539256667 +0000 UTC m=+0.163692828 container attach 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:34:31 compute-0 eloquent_mirzakhani[261538]: 167 167
Nov 29 05:34:31 compute-0 systemd[1]: libpod-85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48.scope: Deactivated successfully.
Nov 29 05:34:31 compute-0 conmon[261538]: conmon 85b974d9c2fa292f670a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48.scope/container/memory.events
Nov 29 05:34:31 compute-0 podman[261522]: 2025-11-29 05:34:31.542083625 +0000 UTC m=+0.166519766 container died 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "format": "json"}]: dispatch
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e5093fadb1c1d305f81538b30b4c06cfd218029f847f491512af115e9280082-merged.mount: Deactivated successfully.
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.571+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86dc64fd-e983-41fb-88c2-0ca9782c4406' of type subvolume
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '86dc64fd-e983-41fb-88c2-0ca9782c4406' of type subvolume
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/86dc64fd-e983-41fb-88c2-0ca9782c4406'' moved to trashcan
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:86dc64fd-e983-41fb-88c2-0ca9782c4406, vol_name:cephfs) < ""
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.586+0000 7fa4ca5eb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.586+0000 7fa4ca5eb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.586+0000 7fa4ca5eb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.586+0000 7fa4ca5eb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.586+0000 7fa4ca5eb640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 podman[261522]: 2025-11-29 05:34:31.592146477 +0000 UTC m=+0.216582608 container remove 85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:34:31 compute-0 systemd[1]: libpod-conmon-85b974d9c2fa292f670abc9ed4b9e93318703cd27d153bfc5b2ba1375807bc48.scope: Deactivated successfully.
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.609+0000 7fa4c8de8640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.609+0000 7fa4c8de8640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.609+0000 7fa4c8de8640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.609+0000 7fa4c8de8640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.609+0000 7fa4c8de8640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:34:31 compute-0 podman[261585]: 2025-11-29 05:34:31.797992357 +0000 UTC m=+0.057847139 container create 900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 05:34:31 compute-0 systemd[1]: Started libpod-conmon-900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9.scope.
Nov 29 05:34:31 compute-0 podman[261585]: 2025-11-29 05:34:31.774016772 +0000 UTC m=+0.033871544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:34:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "format": "json"}]: dispatch
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:139756d1-c4a7-4d9e-860e-88e58c898640, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:139756d1-c4a7-4d9e-860e-88e58c898640, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '139756d1-c4a7-4d9e-860e-88e58c898640' of type subvolume
Nov 29 05:34:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:31.907+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '139756d1-c4a7-4d9e-860e-88e58c898640' of type subvolume
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/139756d1-c4a7-4d9e-860e-88e58c898640'' moved to trashcan
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:139756d1-c4a7-4d9e-860e-88e58c898640, vol_name:cephfs) < ""
Nov 29 05:34:31 compute-0 podman[261585]: 2025-11-29 05:34:31.922024874 +0000 UTC m=+0.181879636 container init 900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pare, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:34:31 compute-0 podman[261585]: 2025-11-29 05:34:31.936135383 +0000 UTC m=+0.195990135 container start 900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:34:31 compute-0 podman[261585]: 2025-11-29 05:34:31.940876506 +0000 UTC m=+0.200731258 container attach 900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pare, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 05:34:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Nov 29 05:34:32 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 29 05:34:32 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3257543193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:34:32 compute-0 rsyslogd[1003]: imjournal from <np0005539482:ceph-mon>: begin to drop messages due to rate-limiting
Nov 29 05:34:32 compute-0 nova_compute[254898]: 2025-11-29 05:34:32.242 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:34:32 compute-0 nova_compute[254898]: 2025-11-29 05:34:32.243 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:34:32 compute-0 nova_compute[254898]: 2025-11-29 05:34:32.243 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:34:32 compute-0 nova_compute[254898]: 2025-11-29 05:34:32.370 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:34:32 compute-0 nova_compute[254898]: 2025-11-29 05:34:32.371 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:34:32 compute-0 nova_compute[254898]: 2025-11-29 05:34:32.371 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:34:32 compute-0 nova_compute[254898]: 2025-11-29 05:34:32.371 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:34:32 compute-0 youthful_pare[261601]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:34:32 compute-0 youthful_pare[261601]: --> relative data size: 1.0
Nov 29 05:34:32 compute-0 youthful_pare[261601]: --> All data devices are unavailable
Nov 29 05:34:32 compute-0 systemd[1]: libpod-900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9.scope: Deactivated successfully.
Nov 29 05:34:32 compute-0 podman[261585]: 2025-11-29 05:34:32.938595542 +0000 UTC m=+1.198450284 container died 900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pare, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:34:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a1e7a5585910f9203c387c24214edf5424f052b130dad75064fe241e4ea15ac-merged.mount: Deactivated successfully.
Nov 29 05:34:33 compute-0 podman[261585]: 2025-11-29 05:34:33.002345071 +0000 UTC m=+1.262199813 container remove 900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_pare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:34:33 compute-0 systemd[1]: libpod-conmon-900460cafb279f09a7b32ed0b665d614de64e6b27802b68b63ca287530deb4a9.scope: Deactivated successfully.
Nov 29 05:34:33 compute-0 sudo[261454]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:33 compute-0 sudo[261645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:34:33 compute-0 sudo[261645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:33 compute-0 sudo[261645]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:33 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.csskcz(active, since 25m)
Nov 29 05:34:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "format": "json"}]: dispatch
Nov 29 05:34:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "86dc64fd-e983-41fb-88c2-0ca9782c4406", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "format": "json"}]: dispatch
Nov 29 05:34:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "139756d1-c4a7-4d9e-860e-88e58c898640", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:33 compute-0 ceph-mon[75176]: pgmap v891: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Nov 29 05:34:33 compute-0 sudo[261670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:34:33 compute-0 sudo[261670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:33 compute-0 sudo[261670]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:33 compute-0 sudo[261695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:34:33 compute-0 sudo[261695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:33 compute-0 sudo[261695]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:33 compute-0 sudo[261720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:34:33 compute-0 sudo[261720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:33 compute-0 podman[261783]: 2025-11-29 05:34:33.619427771 +0000 UTC m=+0.037564212 container create d00340b078dfa587709e7d5458500ae20ae5df6cc14c70a9d7d55b0db8a4f120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lichterman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:34:33 compute-0 systemd[1]: Started libpod-conmon-d00340b078dfa587709e7d5458500ae20ae5df6cc14c70a9d7d55b0db8a4f120.scope.
Nov 29 05:34:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:34:33 compute-0 podman[261783]: 2025-11-29 05:34:33.695735283 +0000 UTC m=+0.113871744 container init d00340b078dfa587709e7d5458500ae20ae5df6cc14c70a9d7d55b0db8a4f120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:34:33 compute-0 podman[261783]: 2025-11-29 05:34:33.603364336 +0000 UTC m=+0.021500837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:34:33 compute-0 podman[261783]: 2025-11-29 05:34:33.701242675 +0000 UTC m=+0.119379116 container start d00340b078dfa587709e7d5458500ae20ae5df6cc14c70a9d7d55b0db8a4f120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:34:33 compute-0 podman[261783]: 2025-11-29 05:34:33.70393905 +0000 UTC m=+0.122075491 container attach d00340b078dfa587709e7d5458500ae20ae5df6cc14c70a9d7d55b0db8a4f120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 05:34:33 compute-0 elastic_lichterman[261799]: 167 167
Nov 29 05:34:33 compute-0 systemd[1]: libpod-d00340b078dfa587709e7d5458500ae20ae5df6cc14c70a9d7d55b0db8a4f120.scope: Deactivated successfully.
Nov 29 05:34:33 compute-0 podman[261783]: 2025-11-29 05:34:33.70523516 +0000 UTC m=+0.123371601 container died d00340b078dfa587709e7d5458500ae20ae5df6cc14c70a9d7d55b0db8a4f120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lichterman, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:34:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7dd415bea35f9b7c5d4e0590824085df14d3e23526abd8c29d3b94b2de40c8a-merged.mount: Deactivated successfully.
Nov 29 05:34:33 compute-0 podman[261783]: 2025-11-29 05:34:33.737741481 +0000 UTC m=+0.155877922 container remove d00340b078dfa587709e7d5458500ae20ae5df6cc14c70a9d7d55b0db8a4f120 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lichterman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:34:33 compute-0 systemd[1]: libpod-conmon-d00340b078dfa587709e7d5458500ae20ae5df6cc14c70a9d7d55b0db8a4f120.scope: Deactivated successfully.
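
The "167 167" printed by the short-lived elastic_lichterman container above is consistent with cephadm probing the uid and gid of the ceph user inside the image (167:167 in these CentOS Stream 9 based Ceph images) before chowning daemon directories. A hedged way to reproduce the probe by hand, assuming podman access to the same digest-pinned image:

    sudo podman run --rm \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        stat -c '%u %g' /var/lib/ceph    # expected output: 167 167
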
Nov 29 05:34:33 compute-0 podman[261822]: 2025-11-29 05:34:33.930087657 +0000 UTC m=+0.051005035 container create 4c18376af21363b08d707a46890fd7c9d4c6c9703a0bb47ba381cbbbf2318d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:34:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Nov 29 05:34:33 compute-0 systemd[1]: Started libpod-conmon-4c18376af21363b08d707a46890fd7c9d4c6c9703a0bb47ba381cbbbf2318d04.scope.
Nov 29 05:34:33 compute-0 podman[261822]: 2025-11-29 05:34:33.901199493 +0000 UTC m=+0.022116951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:34:34 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea3703f88f0f2faa97f49118de749b31bfcfa731eaeb69b55ae32257e45438a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea3703f88f0f2faa97f49118de749b31bfcfa731eaeb69b55ae32257e45438a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea3703f88f0f2faa97f49118de749b31bfcfa731eaeb69b55ae32257e45438a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bea3703f88f0f2faa97f49118de749b31bfcfa731eaeb69b55ae32257e45438a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:34 compute-0 podman[261822]: 2025-11-29 05:34:34.02686729 +0000 UTC m=+0.147784668 container init 4c18376af21363b08d707a46890fd7c9d4c6c9703a0bb47ba381cbbbf2318d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 05:34:34 compute-0 podman[261822]: 2025-11-29 05:34:34.032832173 +0000 UTC m=+0.153749551 container start 4c18376af21363b08d707a46890fd7c9d4c6c9703a0bb47ba381cbbbf2318d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:34:34 compute-0 podman[261822]: 2025-11-29 05:34:34.035567218 +0000 UTC m=+0.156484596 container attach 4c18376af21363b08d707a46890fd7c9d4c6c9703a0bb47ba381cbbbf2318d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:34:34 compute-0 ceph-mon[75176]: mgrmap e11: compute-0.csskcz(active, since 25m)
Nov 29 05:34:34 compute-0 ceph-mon[75176]: pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.8 KiB/s wr, 1 op/s
Nov 29 05:34:34 compute-0 thirsty_spence[261838]: {
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:     "0": [
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:         {
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "devices": [
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "/dev/loop3"
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             ],
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_name": "ceph_lv0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_size": "21470642176",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "name": "ceph_lv0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "tags": {
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.cluster_name": "ceph",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.crush_device_class": "",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.encrypted": "0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.osd_id": "0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.type": "block",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.vdo": "0"
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             },
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "type": "block",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "vg_name": "ceph_vg0"
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:         }
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:     ],
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:     "1": [
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:         {
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "devices": [
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "/dev/loop4"
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             ],
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_name": "ceph_lv1",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_size": "21470642176",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "name": "ceph_lv1",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "tags": {
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.cluster_name": "ceph",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.crush_device_class": "",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.encrypted": "0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.osd_id": "1",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.type": "block",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.vdo": "0"
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             },
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "type": "block",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "vg_name": "ceph_vg1"
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:         }
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:     ],
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:     "2": [
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:         {
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "devices": [
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "/dev/loop5"
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             ],
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_name": "ceph_lv2",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_size": "21470642176",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "name": "ceph_lv2",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "tags": {
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.cluster_name": "ceph",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.crush_device_class": "",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.encrypted": "0",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.osd_id": "2",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.type": "block",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:                 "ceph.vdo": "0"
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             },
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "type": "block",
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:             "vg_name": "ceph_vg2"
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:         }
Nov 29 05:34:34 compute-0 thirsty_spence[261838]:     ]
Nov 29 05:34:34 compute-0 thirsty_spence[261838]: }
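
The JSON emitted by thirsty_spence above is a ceph-volume LVM inventory: a map keyed by OSD id ("0", "1", "2"), each entry describing the logical volume (ceph_vgN/ceph_lvN, about 20 GiB) backing that OSD and the loop device beneath it. A minimal sketch for reproducing and summarizing it from the host, assuming cephadm and jq are installed and using the fsid already shown in this log:

    sudo cephadm ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json \
        | jq -r 'to_entries[] | "osd.\(.key) \(.value[0].lv_path) on \(.value[0].devices[0])"'
    # expected shape, given the JSON above:
    # osd.0 /dev/ceph_vg0/ceph_lv0 on /dev/loop3
    # osd.1 /dev/ceph_vg1/ceph_lv1 on /dev/loop4
    # osd.2 /dev/ceph_vg2/ceph_lv2 on /dev/loop5
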
Nov 29 05:34:34 compute-0 systemd[1]: libpod-4c18376af21363b08d707a46890fd7c9d4c6c9703a0bb47ba381cbbbf2318d04.scope: Deactivated successfully.
Nov 29 05:34:34 compute-0 podman[261822]: 2025-11-29 05:34:34.797680509 +0000 UTC m=+0.918597927 container died 4c18376af21363b08d707a46890fd7c9d4c6c9703a0bb47ba381cbbbf2318d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 05:34:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-bea3703f88f0f2faa97f49118de749b31bfcfa731eaeb69b55ae32257e45438a-merged.mount: Deactivated successfully.
Nov 29 05:34:34 compute-0 podman[261822]: 2025-11-29 05:34:34.8581222 +0000 UTC m=+0.979039578 container remove 4c18376af21363b08d707a46890fd7c9d4c6c9703a0bb47ba381cbbbf2318d04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_spence, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:34:34 compute-0 systemd[1]: libpod-conmon-4c18376af21363b08d707a46890fd7c9d4c6c9703a0bb47ba381cbbbf2318d04.scope: Deactivated successfully.
Nov 29 05:34:34 compute-0 sudo[261720]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:34 compute-0 sudo[261861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:34:34 compute-0 sudo[261861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:34 compute-0 sudo[261861]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:35 compute-0 sudo[261886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:34:35 compute-0 sudo[261886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:35 compute-0 sudo[261886]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:35 compute-0 sudo[261911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:34:35 compute-0 sudo[261911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:35 compute-0 sudo[261911]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:35 compute-0 sudo[261936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:34:35 compute-0 sudo[261936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:35 compute-0 podman[262000]: 2025-11-29 05:34:35.458248102 +0000 UTC m=+0.034223362 container create 56bd2a232f43b590c23333368130353d743b56a2b6b2a0d320106af0b1bc9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:34:35 compute-0 systemd[1]: Started libpod-conmon-56bd2a232f43b590c23333368130353d743b56a2b6b2a0d320106af0b1bc9352.scope.
Nov 29 05:34:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:34:35 compute-0 podman[262000]: 2025-11-29 05:34:35.528720804 +0000 UTC m=+0.104696094 container init 56bd2a232f43b590c23333368130353d743b56a2b6b2a0d320106af0b1bc9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:34:35 compute-0 podman[262000]: 2025-11-29 05:34:35.538598011 +0000 UTC m=+0.114573281 container start 56bd2a232f43b590c23333368130353d743b56a2b6b2a0d320106af0b1bc9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 05:34:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "18c23ac3-9de1-4499-a61d-bb17aaae0f7d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:18c23ac3-9de1-4499-a61d-bb17aaae0f7d, vol_name:cephfs) < ""
Nov 29 05:34:35 compute-0 podman[262000]: 2025-11-29 05:34:35.44400458 +0000 UTC m=+0.019979880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:34:35 compute-0 podman[262000]: 2025-11-29 05:34:35.54146472 +0000 UTC m=+0.117440000 container attach 56bd2a232f43b590c23333368130353d743b56a2b6b2a0d320106af0b1bc9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:34:35 compute-0 stoic_shaw[262015]: 167 167
Nov 29 05:34:35 compute-0 systemd[1]: libpod-56bd2a232f43b590c23333368130353d743b56a2b6b2a0d320106af0b1bc9352.scope: Deactivated successfully.
Nov 29 05:34:35 compute-0 podman[262000]: 2025-11-29 05:34:35.544909202 +0000 UTC m=+0.120884482 container died 56bd2a232f43b590c23333368130353d743b56a2b6b2a0d320106af0b1bc9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:34:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/18c23ac3-9de1-4499-a61d-bb17aaae0f7d/.meta.tmp'
Nov 29 05:34:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/18c23ac3-9de1-4499-a61d-bb17aaae0f7d/.meta.tmp' to config b'/volumes/_nogroup/18c23ac3-9de1-4499-a61d-bb17aaae0f7d/.meta'
Nov 29 05:34:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-765d136502e98adf98a52a1c0cfd054bbfc4bb2e4e1aa6bdcbf2f636fd81c756-merged.mount: Deactivated successfully.
Nov 29 05:34:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:18c23ac3-9de1-4499-a61d-bb17aaae0f7d, vol_name:cephfs) < ""
Nov 29 05:34:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "18c23ac3-9de1-4499-a61d-bb17aaae0f7d", "format": "json"}]: dispatch
Nov 29 05:34:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18c23ac3-9de1-4499-a61d-bb17aaae0f7d, vol_name:cephfs) < ""
Nov 29 05:34:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:18c23ac3-9de1-4499-a61d-bb17aaae0f7d, vol_name:cephfs) < ""
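
The audit trail above shows the mgr volumes module servicing a share-create request from entity client.openstack (UUID-named subvolumes of exactly 1 GiB with namespace isolation suggest the OpenStack Manila CephFS driver). A hedged CLI equivalent of the create-then-getpath pair, assuming an admin keyring on the host:

    ceph fs subvolume create cephfs 18c23ac3-9de1-4499-a61d-bb17aaae0f7d \
        --size 1073741824 --namespace-isolated --mode 0755
    ceph fs subvolume getpath cephfs 18c23ac3-9de1-4499-a61d-bb17aaae0f7d
    # getpath prints the mountable path, e.g. /volumes/_nogroup/18c23ac3-.../<uuid>
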
Nov 29 05:34:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:34:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:35 compute-0 podman[262000]: 2025-11-29 05:34:35.58397584 +0000 UTC m=+0.159951110 container remove 56bd2a232f43b590c23333368130353d743b56a2b6b2a0d320106af0b1bc9352 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:34:35 compute-0 systemd[1]: libpod-conmon-56bd2a232f43b590c23333368130353d743b56a2b6b2a0d320106af0b1bc9352.scope: Deactivated successfully.
Nov 29 05:34:35 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:35 compute-0 podman[262041]: 2025-11-29 05:34:35.761415899 +0000 UTC m=+0.049560131 container create f4ad01cfebbed16b272840be7e2d1af96125dbb7596c6e0c2eb0599108ca35a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ride, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:34:35 compute-0 systemd[1]: Started libpod-conmon-f4ad01cfebbed16b272840be7e2d1af96125dbb7596c6e0c2eb0599108ca35a8.scope.
Nov 29 05:34:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39d612faf8bf0bab3ef85288131a788a0cc12c9d7923cf6933adf892260def7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39d612faf8bf0bab3ef85288131a788a0cc12c9d7923cf6933adf892260def7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39d612faf8bf0bab3ef85288131a788a0cc12c9d7923cf6933adf892260def7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b39d612faf8bf0bab3ef85288131a788a0cc12c9d7923cf6933adf892260def7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:34:35 compute-0 podman[262041]: 2025-11-29 05:34:35.745695781 +0000 UTC m=+0.033840013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:34:35 compute-0 podman[262041]: 2025-11-29 05:34:35.853811196 +0000 UTC m=+0.141955408 container init f4ad01cfebbed16b272840be7e2d1af96125dbb7596c6e0c2eb0599108ca35a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ride, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:34:35 compute-0 podman[262041]: 2025-11-29 05:34:35.861928491 +0000 UTC m=+0.150072693 container start f4ad01cfebbed16b272840be7e2d1af96125dbb7596c6e0c2eb0599108ca35a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:34:35 compute-0 podman[262041]: 2025-11-29 05:34:35.865028895 +0000 UTC m=+0.153173117 container attach f4ad01cfebbed16b272840be7e2d1af96125dbb7596c6e0c2eb0599108ca35a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ride, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:34:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 4 op/s
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "18c23ac3-9de1-4499-a61d-bb17aaae0f7d", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:18c23ac3-9de1-4499-a61d-bb17aaae0f7d, vol_name:cephfs) < ""
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:18c23ac3-9de1-4499-a61d-bb17aaae0f7d, vol_name:cephfs) < ""
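
The resize above doubles the quota on the subvolume just created, from 1073741824 bytes (1 GiB) to 2147483648 bytes (2 GiB); on the CLI the new size is passed positionally:

    ceph fs subvolume resize cephfs 18c23ac3-9de1-4499-a61d-bb17aaae0f7d 2147483648
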
Nov 29 05:34:36 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "18c23ac3-9de1-4499-a61d-bb17aaae0f7d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:36 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "18c23ac3-9de1-4499-a61d-bb17aaae0f7d", "format": "json"}]: dispatch
Nov 29 05:34:36 compute-0 ceph-mon[75176]: pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 13 KiB/s wr, 4 op/s
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "00bcce59-7712-4975-bf1e-f275f12b7d66", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:00bcce59-7712-4975-bf1e-f275f12b7d66, vol_name:cephfs) < ""
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/00bcce59-7712-4975-bf1e-f275f12b7d66/.meta.tmp'
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/00bcce59-7712-4975-bf1e-f275f12b7d66/.meta.tmp' to config b'/volumes/_nogroup/00bcce59-7712-4975-bf1e-f275f12b7d66/.meta'
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:00bcce59-7712-4975-bf1e-f275f12b7d66, vol_name:cephfs) < ""
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "00bcce59-7712-4975-bf1e-f275f12b7d66", "format": "json"}]: dispatch
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:00bcce59-7712-4975-bf1e-f275f12b7d66, vol_name:cephfs) < ""
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:00bcce59-7712-4975-bf1e-f275f12b7d66, vol_name:cephfs) < ""
Nov 29 05:34:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:34:36 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:36 compute-0 blissful_ride[262058]: {
Nov 29 05:34:36 compute-0 blissful_ride[262058]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "osd_id": 0,
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "type": "bluestore"
Nov 29 05:34:36 compute-0 blissful_ride[262058]:     },
Nov 29 05:34:36 compute-0 blissful_ride[262058]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "osd_id": 1,
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "type": "bluestore"
Nov 29 05:34:36 compute-0 blissful_ride[262058]:     },
Nov 29 05:34:36 compute-0 blissful_ride[262058]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "osd_id": 2,
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:34:36 compute-0 blissful_ride[262058]:         "type": "bluestore"
Nov 29 05:34:36 compute-0 blissful_ride[262058]:     }
Nov 29 05:34:36 compute-0 blissful_ride[262058]: }
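
This second JSON block is the output of the `raw list` invocation visible in the sudo COMMAND= line above: the same three bluestore OSDs, now keyed by osd_uuid and resolved to their device-mapper paths. A hedged equivalent run directly, with jq used only to flatten the result (the image digest and fsid are the ones already in this log):

    sudo cephadm --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json \
        | jq -r 'to_entries[] | "osd.\(.value.osd_id) \(.value.device) (\(.value.type))"'
    # osd.0 /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)
    # osd.1 /dev/mapper/ceph_vg1-ceph_lv1 (bluestore)
    # osd.2 /dev/mapper/ceph_vg2-ceph_lv2 (bluestore)
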
Nov 29 05:34:36 compute-0 systemd[1]: libpod-f4ad01cfebbed16b272840be7e2d1af96125dbb7596c6e0c2eb0599108ca35a8.scope: Deactivated successfully.
Nov 29 05:34:36 compute-0 systemd[1]: libpod-f4ad01cfebbed16b272840be7e2d1af96125dbb7596c6e0c2eb0599108ca35a8.scope: Consumed 1.031s CPU time.
Nov 29 05:34:36 compute-0 podman[262041]: 2025-11-29 05:34:36.885429805 +0000 UTC m=+1.173574067 container died f4ad01cfebbed16b272840be7e2d1af96125dbb7596c6e0c2eb0599108ca35a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:34:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b39d612faf8bf0bab3ef85288131a788a0cc12c9d7923cf6933adf892260def7-merged.mount: Deactivated successfully.
Nov 29 05:34:36 compute-0 podman[262041]: 2025-11-29 05:34:36.940384633 +0000 UTC m=+1.228528835 container remove f4ad01cfebbed16b272840be7e2d1af96125dbb7596c6e0c2eb0599108ca35a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_ride, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 05:34:36 compute-0 systemd[1]: libpod-conmon-f4ad01cfebbed16b272840be7e2d1af96125dbb7596c6e0c2eb0599108ca35a8.scope: Deactivated successfully.
Nov 29 05:34:36 compute-0 sudo[261936]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:34:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:34:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:34:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 7a04f279-5e6d-437e-9740-71f4a985e25f does not exist
Nov 29 05:34:36 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a766314d-5ec5-4a5e-8f40-ecc17a785869 does not exist
Nov 29 05:34:37 compute-0 sudo[262105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:34:37 compute-0 sudo[262105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:37 compute-0 sudo[262105]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:37 compute-0 sudo[262130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:34:37 compute-0 sudo[262130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:34:37 compute-0 sudo[262130]: pam_unix(sudo:session): session closed for user root
Nov 29 05:34:37 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "18c23ac3-9de1-4499-a61d-bb17aaae0f7d", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 29 05:34:37 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "00bcce59-7712-4975-bf1e-f275f12b7d66", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:37 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "00bcce59-7712-4975-bf1e-f275f12b7d66", "format": "json"}]: dispatch
Nov 29 05:34:37 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:34:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:34:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 3 op/s
Nov 29 05:34:38 compute-0 ceph-mon[75176]: pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 3 op/s
Nov 29 05:34:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 16 KiB/s wr, 5 op/s
Nov 29 05:34:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "18c23ac3-9de1-4499-a61d-bb17aaae0f7d", "format": "json"}]: dispatch
Nov 29 05:34:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:18c23ac3-9de1-4499-a61d-bb17aaae0f7d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:18c23ac3-9de1-4499-a61d-bb17aaae0f7d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:40 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18c23ac3-9de1-4499-a61d-bb17aaae0f7d' of type subvolume
Nov 29 05:34:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:40.030+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '18c23ac3-9de1-4499-a61d-bb17aaae0f7d' of type subvolume
Nov 29 05:34:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "18c23ac3-9de1-4499-a61d-bb17aaae0f7d", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18c23ac3-9de1-4499-a61d-bb17aaae0f7d, vol_name:cephfs) < ""
Nov 29 05:34:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/18c23ac3-9de1-4499-a61d-bb17aaae0f7d'' moved to trashcan
Nov 29 05:34:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:34:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:18c23ac3-9de1-4499-a61d-bb17aaae0f7d, vol_name:cephfs) < ""
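
The "(95) Operation not supported" replies above are expected: `fs clone status` is only valid for subvolumes of type clone, so the client probes it, gets EOPNOTSUPP on a plain subvolume, and proceeds to delete. Removal is asynchronous: the subvolume path is moved to a trash directory and a purge job is queued (the "moved to trashcan" / "queuing job" lines). A hedged CLI equivalent of the same probe-then-delete sequence:

    ceph fs clone status cephfs 18c23ac3-9de1-4499-a61d-bb17aaae0f7d   # EOPNOTSUPP on a non-clone
    ceph fs subvolume rm cephfs 18c23ac3-9de1-4499-a61d-bb17aaae0f7d --force
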
Nov 29 05:34:41 compute-0 ceph-mon[75176]: pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 16 KiB/s wr, 5 op/s
Nov 29 05:34:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "18c23ac3-9de1-4499-a61d-bb17aaae0f7d", "format": "json"}]: dispatch
Nov 29 05:34:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "18c23ac3-9de1-4499-a61d-bb17aaae0f7d", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:34:41
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'backups', 'vms', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
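
The balancer pass above ran in upmap mode, evaluated the eleven listed pools, and prepared 0 of an allowed 10 optimizations, i.e. PG placement was already even enough that no upmap entries were generated. To inspect the same state by hand:

    ceph balancer status   # mode, active flag, and any queued plans
    ceph balancer eval     # score of the current PG distribution (lower is better)
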
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:34:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 3 op/s
Nov 29 05:34:43 compute-0 ceph-mon[75176]: pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 3 op/s
Nov 29 05:34:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58d0834c-039f-43fa-9037-b2e41fbfffb3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58d0834c-039f-43fa-9037-b2e41fbfffb3, vol_name:cephfs) < ""
Nov 29 05:34:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/58d0834c-039f-43fa-9037-b2e41fbfffb3/.meta.tmp'
Nov 29 05:34:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/58d0834c-039f-43fa-9037-b2e41fbfffb3/.meta.tmp' to config b'/volumes/_nogroup/58d0834c-039f-43fa-9037-b2e41fbfffb3/.meta'
Nov 29 05:34:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58d0834c-039f-43fa-9037-b2e41fbfffb3, vol_name:cephfs) < ""
Nov 29 05:34:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58d0834c-039f-43fa-9037-b2e41fbfffb3", "format": "json"}]: dispatch
Nov 29 05:34:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58d0834c-039f-43fa-9037-b2e41fbfffb3, vol_name:cephfs) < ""
Nov 29 05:34:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58d0834c-039f-43fa-9037-b2e41fbfffb3, vol_name:cephfs) < ""
Nov 29 05:34:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:34:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 3 op/s
Nov 29 05:34:44 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "00bcce59-7712-4975-bf1e-f275f12b7d66", "format": "json"}]: dispatch
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:00bcce59-7712-4975-bf1e-f275f12b7d66, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:00bcce59-7712-4975-bf1e-f275f12b7d66, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '00bcce59-7712-4975-bf1e-f275f12b7d66' of type subvolume
Nov 29 05:34:44 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:44.249+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '00bcce59-7712-4975-bf1e-f275f12b7d66' of type subvolume
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "00bcce59-7712-4975-bf1e-f275f12b7d66", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:00bcce59-7712-4975-bf1e-f275f12b7d66, vol_name:cephfs) < ""
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/00bcce59-7712-4975-bf1e-f275f12b7d66'' moved to trashcan
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:00bcce59-7712-4975-bf1e-f275f12b7d66, vol_name:cephfs) < ""
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58d0834c-039f-43fa-9037-b2e41fbfffb3", "format": "json"}]: dispatch
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:58d0834c-039f-43fa-9037-b2e41fbfffb3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:58d0834c-039f-43fa-9037-b2e41fbfffb3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:44 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:44.580+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58d0834c-039f-43fa-9037-b2e41fbfffb3' of type subvolume
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58d0834c-039f-43fa-9037-b2e41fbfffb3' of type subvolume
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58d0834c-039f-43fa-9037-b2e41fbfffb3", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58d0834c-039f-43fa-9037-b2e41fbfffb3, vol_name:cephfs) < ""
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/58d0834c-039f-43fa-9037-b2e41fbfffb3'' moved to trashcan
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:34:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58d0834c-039f-43fa-9037-b2e41fbfffb3, vol_name:cephfs) < ""
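The two errno 95 replies above are expected rather than faults: `fs clone status` is only valid for a subvolume of type "clone", and the mgr refuses it with EOPNOTSUPP on a plain subvolume. The pattern in the log (status probe fails with 95, then `fs subvolume rm --force` proceeds) suggests the client treats that errno as "not a clone, safe to delete". A sketch of that probe-then-delete pattern under the same assumptions as above (the stderr match and JSON state check are assumptions, keyed to the message text in this log):

    import subprocess

    def clone_in_progress(vol, name):
        """Return True if `name` is a clone that is still materializing.

        `fs clone status` fails with errno 95 (EOPNOTSUPP) when `name` is a
        plain subvolume rather than a clone, as seen in the log above.
        """
        res = subprocess.run(
            ["ceph", "--id", "openstack", "fs", "clone", "status", vol, name],
            capture_output=True)
        if res.returncode == 0:
            return b'"complete"' not in res.stdout   # assumption: state in JSON body
        if b"not allowed on subvolume" in res.stderr:
            return False           # not a clone at all, matches the log message
        res.check_returncode()     # any other failure is a real error

    def delete_subvolume(vol, name):
        if not clone_in_progress(vol, name):
            subprocess.run(
                ["ceph", "--id", "openstack", "fs", "subvolume", "rm",
                 vol, name, "--force"], check=True)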
Nov 29 05:34:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58d0834c-039f-43fa-9037-b2e41fbfffb3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58d0834c-039f-43fa-9037-b2e41fbfffb3", "format": "json"}]: dispatch
Nov 29 05:34:45 compute-0 ceph-mon[75176]: pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 11 KiB/s wr, 3 op/s
Nov 29 05:34:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 18 KiB/s wr, 6 op/s
Nov 29 05:34:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "00bcce59-7712-4975-bf1e-f275f12b7d66", "format": "json"}]: dispatch
Nov 29 05:34:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "00bcce59-7712-4975-bf1e-f275f12b7d66", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58d0834c-039f-43fa-9037-b2e41fbfffb3", "format": "json"}]: dispatch
Nov 29 05:34:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58d0834c-039f-43fa-9037-b2e41fbfffb3", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:47 compute-0 podman[262155]: 2025-11-29 05:34:47.013957249 +0000 UTC m=+0.065516634 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
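The podman entry above is a periodic healthcheck event for the multipathd container: `health_status=healthy` with `health_failing_streak=0` means the configured test (`/openstack/healthcheck`, bind-mounted from /var/lib/openstack/healthchecks/multipathd) passed. A sketch of querying the same state on the host, assuming a reasonably recent podman (older releases expose the field as .State.Healthcheck rather than .State.Health):

    import subprocess

    # Ask podman for the current health state of the container named in the log.
    fmt = "{{.State.Health.Status}}"   # older podman: {{.State.Healthcheck.Status}}
    status = subprocess.check_output(
        ["podman", "inspect", "--format", fmt, "multipathd"]).decode().strip()
    print(status)   # expected: "healthy", matching health_status above

    # The configured test can also be run on demand:
    subprocess.run(["podman", "healthcheck", "run", "multipathd"], check=True)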
Nov 29 05:34:47 compute-0 ceph-mon[75176]: pgmap v898: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 18 KiB/s wr, 6 op/s
Nov 29 05:34:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 3 op/s
Nov 29 05:34:48 compute-0 ceph-mon[75176]: pgmap v899: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 12 KiB/s wr, 3 op/s
Nov 29 05:34:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c849ef50-3cb1-498e-ae97-f5cb56db1715", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c849ef50-3cb1-498e-ae97-f5cb56db1715, vol_name:cephfs) < ""
Nov 29 05:34:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c849ef50-3cb1-498e-ae97-f5cb56db1715/.meta.tmp'
Nov 29 05:34:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c849ef50-3cb1-498e-ae97-f5cb56db1715/.meta.tmp' to config b'/volumes/_nogroup/c849ef50-3cb1-498e-ae97-f5cb56db1715/.meta'
Nov 29 05:34:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c849ef50-3cb1-498e-ae97-f5cb56db1715, vol_name:cephfs) < ""
Nov 29 05:34:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c849ef50-3cb1-498e-ae97-f5cb56db1715", "format": "json"}]: dispatch
Nov 29 05:34:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c849ef50-3cb1-498e-ae97-f5cb56db1715, vol_name:cephfs) < ""
Nov 29 05:34:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c849ef50-3cb1-498e-ae97-f5cb56db1715, vol_name:cephfs) < ""
Nov 29 05:34:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:34:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:49 compute-0 sshd-session[262175]: Invalid user andy from 45.120.216.232 port 37674
Nov 29 05:34:49 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:34:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 16 KiB/s wr, 5 op/s
Nov 29 05:34:50 compute-0 sshd-session[262175]: Received disconnect from 45.120.216.232 port 37674:11: Bye Bye [preauth]
Nov 29 05:34:50 compute-0 sshd-session[262175]: Disconnected from invalid user andy 45.120.216.232 port 37674 [preauth]
Nov 29 05:34:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c849ef50-3cb1-498e-ae97-f5cb56db1715", "format": "json"}]: dispatch
Nov 29 05:34:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c849ef50-3cb1-498e-ae97-f5cb56db1715, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c849ef50-3cb1-498e-ae97-f5cb56db1715, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:34:50 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:34:50.274+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c849ef50-3cb1-498e-ae97-f5cb56db1715' of type subvolume
Nov 29 05:34:50 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c849ef50-3cb1-498e-ae97-f5cb56db1715' of type subvolume
Nov 29 05:34:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c849ef50-3cb1-498e-ae97-f5cb56db1715", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c849ef50-3cb1-498e-ae97-f5cb56db1715, vol_name:cephfs) < ""
Nov 29 05:34:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c849ef50-3cb1-498e-ae97-f5cb56db1715'' moved to trashcan
Nov 29 05:34:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:34:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c849ef50-3cb1-498e-ae97-f5cb56db1715, vol_name:cephfs) < ""
Nov 29 05:34:50 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c849ef50-3cb1-498e-ae97-f5cb56db1715", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:34:50 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c849ef50-3cb1-498e-ae97-f5cb56db1715", "format": "json"}]: dispatch
Nov 29 05:34:50 compute-0 ceph-mon[75176]: pgmap v900: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 16 KiB/s wr, 5 op/s
Nov 29 05:34:51 compute-0 podman[262177]: 2025-11-29 05:34:51.057673236 +0000 UTC m=+0.111859146 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.023665917822492e-06 of space, bias 4.0, pg target 0.00602839910138699 quantized to 16 (current 16)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
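The `_maybe_adjust` pass above recomputes each pool's PG target, and the printed numbers are internally consistent: pg target = (fraction of capacity used) x bias x (PG target for the whole root), where the root target here is evidently 300 (three OSDs at the default mon_target_pg_per_osd of 100), and the result is then quantized to a power of two no lower than the pool's floor. Checking two of the lines above:

    # Reproduce the pg_autoscaler arithmetic from the log entries above.
    ROOT_PG_TARGET = 300   # inferred: 3 OSDs * mon_target_pg_per_osd (default 100)

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * ROOT_PG_TARGET

    # Pool 'images': using 0.000665858301588852 of space, bias 1.0
    print(pg_target(0.000665858301588852, 1.0))   # 0.19975749047665559, as logged

    # Pool 'cephfs.cephfs.meta': using 5.023665917822492e-06, bias 4.0
    print(pg_target(5.023665917822492e-06, 4.0))  # 0.00602839910138699, as logged

Every computed target is far below the pool's current PG count, which is why each line quantizes back to the current value and no pg_num adjustments follow.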
Nov 29 05:34:51 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c849ef50-3cb1-498e-ae97-f5cb56db1715", "format": "json"}]: dispatch
Nov 29 05:34:51 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c849ef50-3cb1-498e-ae97-f5cb56db1715", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 11 KiB/s wr, 4 op/s
Nov 29 05:34:52 compute-0 ceph-mon[75176]: pgmap v901: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 11 KiB/s wr, 4 op/s
Nov 29 05:34:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 11 KiB/s wr, 4 op/s
Nov 29 05:34:54 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "9f4ca2d4-2a6d-40ec-a151-65908d28e8e3", "format": "json"}]: dispatch
Nov 29 05:34:54 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9f4ca2d4-2a6d-40ec-a151-65908d28e8e3, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:34:54 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9f4ca2d4-2a6d-40ec-a151-65908d28e8e3, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:34:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:55 compute-0 ceph-mon[75176]: pgmap v902: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 11 KiB/s wr, 4 op/s
Nov 29 05:34:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 15 KiB/s wr, 5 op/s
Nov 29 05:34:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "9f4ca2d4-2a6d-40ec-a151-65908d28e8e3_4bb539df-e3c9-4ce9-845d-0ec16051653c", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9f4ca2d4-2a6d-40ec-a151-65908d28e8e3_4bb539df-e3c9-4ce9-845d-0ec16051653c, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:34:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:34:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:34:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9f4ca2d4-2a6d-40ec-a151-65908d28e8e3_4bb539df-e3c9-4ce9-845d-0ec16051653c, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:34:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "9f4ca2d4-2a6d-40ec-a151-65908d28e8e3", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9f4ca2d4-2a6d-40ec-a151-65908d28e8e3, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:34:56 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "9f4ca2d4-2a6d-40ec-a151-65908d28e8e3", "format": "json"}]: dispatch
Nov 29 05:34:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:34:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:34:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9f4ca2d4-2a6d-40ec-a151-65908d28e8e3, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
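The block above is a snapshot round trip for subvolume e887b8f7-1920-4aa9-a22b-586da6843031: one `snapshot create`, then two forced `snapshot rm` calls (one for a suffixed leftover name, one for the snapshot itself), each rewriting the subvolume's .meta via the write-to-.meta.tmp-then-rename pattern, which makes the config update atomic on CephFS. The equivalent client-side calls, as a sketch with the names from the log and the same CLI assumptions as earlier:

    import subprocess

    VOL = "cephfs"
    SUB = "e887b8f7-1920-4aa9-a22b-586da6843031"
    SNAP = "9f4ca2d4-2a6d-40ec-a151-65908d28e8e3"

    def ceph_fs(*args):
        subprocess.run(["ceph", "--id", "openstack", "fs", *args], check=True)

    ceph_fs("subvolume", "snapshot", "create", VOL, SUB, SNAP)
    ceph_fs("subvolume", "snapshot", "rm", VOL, SUB, SNAP, "--force")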
Nov 29 05:34:57 compute-0 ceph-mon[75176]: pgmap v903: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 15 KiB/s wr, 5 op/s
Nov 29 05:34:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "9f4ca2d4-2a6d-40ec-a151-65908d28e8e3_4bb539df-e3c9-4ce9-845d-0ec16051653c", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "9f4ca2d4-2a6d-40ec-a151-65908d28e8e3", "force": true, "format": "json"}]: dispatch
Nov 29 05:34:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 8.2 KiB/s wr, 3 op/s
Nov 29 05:34:59 compute-0 ceph-mon[75176]: pgmap v904: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 8.2 KiB/s wr, 3 op/s
Nov 29 05:34:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.930091) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394499930137, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 732, "num_deletes": 255, "total_data_size": 938349, "memory_usage": 952968, "flush_reason": "Manual Compaction"}
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394499937755, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 930732, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18785, "largest_seqno": 19516, "table_properties": {"data_size": 926929, "index_size": 1519, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8816, "raw_average_key_size": 18, "raw_value_size": 919033, "raw_average_value_size": 1959, "num_data_blocks": 68, "num_entries": 469, "num_filter_entries": 469, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394455, "oldest_key_time": 1764394455, "file_creation_time": 1764394499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 7695 microseconds, and 4269 cpu microseconds.
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.937790) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 930732 bytes OK
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.937807) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.939148) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.939167) EVENT_LOG_v1 {"time_micros": 1764394499939161, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.939184) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 934458, prev total WAL file size 934458, number of live WAL files 2.
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.939691) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(908KB)], [44(6174KB)]
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394499939728, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7253870, "oldest_snapshot_seqno": -1}
Nov 29 05:34:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 13 KiB/s wr, 5 op/s
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4161 keys, 7131774 bytes, temperature: kUnknown
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394499979130, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7131774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7103392, "index_size": 16880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 103159, "raw_average_key_size": 24, "raw_value_size": 7027497, "raw_average_value_size": 1688, "num_data_blocks": 708, "num_entries": 4161, "num_filter_entries": 4161, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.979399) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7131774 bytes
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.980936) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.8 rd, 180.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.0 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(15.5) write-amplify(7.7) OK, records in: 4686, records dropped: 525 output_compression: NoCompression
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.980983) EVENT_LOG_v1 {"time_micros": 1764394499980963, "job": 22, "event": "compaction_finished", "compaction_time_micros": 39475, "compaction_time_cpu_micros": 20487, "output_level": 6, "num_output_files": 1, "total_output_size": 7131774, "num_input_records": 4686, "num_output_records": 4161, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394499981336, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394499982400, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.939609) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.982446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.982452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.982454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.982456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:34:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:34:59.982457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
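The flush/compaction burst above is the mon's RocksDB store compacting its logm range: job 21 flushes a ~0.9 MB L0 table (file 46), and job 22 merges it with the existing ~6.0 MB L6 table (file 44) into a single 7.1 MB L6 file (file 47), dropping 525 of 4686 records. The amplification and throughput figures in the job-22 summary follow directly from the byte counts in the EVENT_LOG_v1 entries:

    # Numbers taken from the EVENT_LOG_v1 entries of jobs 21/22 above.
    l0_in    = 930_732      # file 46, the freshly flushed L0 table
    total_in = 7_253_870    # input_data_size: file 46 + file 44
    l6_out   = 7_131_774    # file 47, the compacted L6 output
    micros   = 39_475       # compaction_time_micros

    print(l6_out / l0_in)                # ~7.66  -> logged write-amplify(7.7)
    print((total_in + l6_out) / l0_in)   # ~15.46 -> logged read-write-amplify(15.5)
    # bytes per microsecond is numerically MB/s (both carry a factor of 1e6):
    print(total_in / micros)             # ~183.8 MB/s rd, as logged
    print(l6_out / micros)               # ~180.7 MB/s wr, as logged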
Nov 29 05:35:00 compute-0 podman[262204]: 2025-11-29 05:35:00.001336795 +0000 UTC m=+0.046336723 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 05:35:00 compute-0 ceph-mon[75176]: pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 13 KiB/s wr, 5 op/s
Nov 29 05:35:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 8.4 KiB/s wr, 3 op/s
Nov 29 05:35:03 compute-0 ceph-mon[75176]: pgmap v906: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 8.4 KiB/s wr, 3 op/s
Nov 29 05:35:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 29 05:35:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 8.2 KiB/s wr, 3 op/s
Nov 29 05:35:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 05:35:04 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 29 05:35:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:35:05 compute-0 ceph-mon[75176]: pgmap v907: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 8.2 KiB/s wr, 3 op/s
Nov 29 05:35:05 compute-0 ceph-mon[75176]: osdmap e126: 3 total, 3 up, 3 in
Nov 29 05:35:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 8.0 KiB/s wr, 2 op/s
Nov 29 05:35:06 compute-0 ceph-mon[75176]: pgmap v909: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 8.0 KiB/s wr, 2 op/s
Nov 29 05:35:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 8.0 KiB/s wr, 2 op/s
Nov 29 05:35:08 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:08 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:08 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta.tmp'
Nov 29 05:35:08 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta.tmp' to config b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta'
Nov 29 05:35:08 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:08 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "format": "json"}]: dispatch
Nov 29 05:35:08 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:08 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:35:08 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:09 compute-0 ceph-mon[75176]: pgmap v910: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 8.0 KiB/s wr, 2 op/s
Nov 29 05:35:09 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s wr, 1 op/s
Nov 29 05:35:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:35:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0626d330-257a-411d-9af3-ea14cd2279db", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:0626d330-257a-411d-9af3-ea14cd2279db, vol_name:cephfs) < ""
Nov 29 05:35:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0626d330-257a-411d-9af3-ea14cd2279db/.meta.tmp'
Nov 29 05:35:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0626d330-257a-411d-9af3-ea14cd2279db/.meta.tmp' to config b'/volumes/_nogroup/0626d330-257a-411d-9af3-ea14cd2279db/.meta'
Nov 29 05:35:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:0626d330-257a-411d-9af3-ea14cd2279db, vol_name:cephfs) < ""
Nov 29 05:35:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0626d330-257a-411d-9af3-ea14cd2279db", "format": "json"}]: dispatch
Nov 29 05:35:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0626d330-257a-411d-9af3-ea14cd2279db, vol_name:cephfs) < ""
Nov 29 05:35:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0626d330-257a-411d-9af3-ea14cd2279db, vol_name:cephfs) < ""
Nov 29 05:35:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:35:10 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:10 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:10 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "format": "json"}]: dispatch
Nov 29 05:35:10 compute-0 ceph-mon[75176]: pgmap v911: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s wr, 1 op/s
Nov 29 05:35:10 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:35:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0626d330-257a-411d-9af3-ea14cd2279db", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0626d330-257a-411d-9af3-ea14cd2279db", "format": "json"}]: dispatch
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "aac92040-d3b9-4ca6-8113-38a011a7589d", "format": "json"}]: dispatch
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aac92040-d3b9-4ca6-8113-38a011a7589d, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aac92040-d3b9-4ca6-8113-38a011a7589d, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "snap_name": "8cf4c283-b297-4eb5-902b-efa757c8775f", "format": "json"}]: dispatch
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8cf4c283-b297-4eb5-902b-efa757c8775f, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8cf4c283-b297-4eb5-902b-efa757c8775f, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s wr, 1 op/s
Nov 29 05:35:12 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "aac92040-d3b9-4ca6-8113-38a011a7589d", "format": "json"}]: dispatch
Nov 29 05:35:12 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "snap_name": "8cf4c283-b297-4eb5-902b-efa757c8775f", "format": "json"}]: dispatch
Nov 29 05:35:12 compute-0 ceph-mon[75176]: pgmap v912: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s wr, 1 op/s
Nov 29 05:35:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c22e2d86-7063-410a-82af-ad809d954609", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:c22e2d86-7063-410a-82af-ad809d954609, vol_name:cephfs) < ""
Nov 29 05:35:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c22e2d86-7063-410a-82af-ad809d954609/.meta.tmp'
Nov 29 05:35:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c22e2d86-7063-410a-82af-ad809d954609/.meta.tmp' to config b'/volumes/_nogroup/c22e2d86-7063-410a-82af-ad809d954609/.meta'
Nov 29 05:35:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:c22e2d86-7063-410a-82af-ad809d954609, vol_name:cephfs) < ""
Nov 29 05:35:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c22e2d86-7063-410a-82af-ad809d954609", "format": "json"}]: dispatch
Nov 29 05:35:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c22e2d86-7063-410a-82af-ad809d954609, vol_name:cephfs) < ""
Nov 29 05:35:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c22e2d86-7063-410a-82af-ad809d954609, vol_name:cephfs) < ""
Nov 29 05:35:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:35:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:13 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:35:13.749 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:35:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:35:13.749 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:35:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:35:13.750 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:35:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s wr, 1 op/s
Nov 29 05:35:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:35:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3662167493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:35:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:35:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3662167493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:35:14 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c22e2d86-7063-410a-82af-ad809d954609", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:14 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c22e2d86-7063-410a-82af-ad809d954609", "format": "json"}]: dispatch
Nov 29 05:35:14 compute-0 ceph-mon[75176]: pgmap v913: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s wr, 1 op/s
Nov 29 05:35:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3662167493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:35:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3662167493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:35:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:35:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 29 05:35:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 05:35:15 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "snap_name": "8cf4c283-b297-4eb5-902b-efa757c8775f", "target_sub_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "format": "json"}]: dispatch
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:8cf4c283-b297-4eb5-902b-efa757c8775f, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, target_sub_name:08ff9271-7e61-496e-a296-8bfe1694c401, vol_name:cephfs) < ""
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta.tmp'
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta.tmp' to config b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta'
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.clone_index] tracking-id e238a17c-1d6c-47f9-bb57-1112ab9cdb2b for path b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401'
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta.tmp'
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta.tmp' to config b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta'
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:8cf4c283-b297-4eb5-902b-efa757c8775f, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, target_sub_name:08ff9271-7e61-496e-a296-8bfe1694c401, vol_name:cephfs) < ""
Nov 29 05:35:15 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:35:15.619+0000 7fa4cbdee640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "format": "json"}]: dispatch
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:08ff9271-7e61-496e-a296-8bfe1694c401, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:08ff9271-7e61-496e-a296-8bfe1694c401, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 08ff9271-7e61-496e-a296-8bfe1694c401)
Nov 29 05:35:15 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:35:15.635+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:35:15 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 08ff9271-7e61-496e-a296-8bfe1694c401) -- by 0 seconds
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta.tmp'
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta.tmp' to config b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta'
Nov 29 05:35:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 2 op/s
Nov 29 05:35:16 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "5fbba3e6-4a9f-416a-b875-dbe87783ac9f", "format": "json"}]: dispatch
Nov 29 05:35:16 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5fbba3e6-4a9f-416a-b875-dbe87783ac9f, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:16 compute-0 ceph-mon[75176]: osdmap e127: 3 total, 3 up, 3 in
Nov 29 05:35:16 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "snap_name": "8cf4c283-b297-4eb5-902b-efa757c8775f", "target_sub_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "format": "json"}]: dispatch
Nov 29 05:35:16 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "format": "json"}]: dispatch
Nov 29 05:35:16 compute-0 ceph-mon[75176]: pgmap v915: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 2 op/s
Nov 29 05:35:16 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "5fbba3e6-4a9f-416a-b875-dbe87783ac9f", "format": "json"}]: dispatch
Nov 29 05:35:17 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.csskcz(active, since 26m)
Nov 29 05:35:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.snap/8cf4c283-b297-4eb5-902b-efa757c8775f/36319a29-1977-4032-8a77-761ad8f11f63' to b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/861fda5e-e27f-4bc2-b355-4d8a0c58fc51'
Nov 29 05:35:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5fbba3e6-4a9f-416a-b875-dbe87783ac9f, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 2 op/s
Nov 29 05:35:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0626d330-257a-411d-9af3-ea14cd2279db", "format": "json"}]: dispatch
Nov 29 05:35:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0626d330-257a-411d-9af3-ea14cd2279db, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0626d330-257a-411d-9af3-ea14cd2279db, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0626d330-257a-411d-9af3-ea14cd2279db' of type subvolume
Nov 29 05:35:18 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:35:18.000+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0626d330-257a-411d-9af3-ea14cd2279db' of type subvolume
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0626d330-257a-411d-9af3-ea14cd2279db", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0626d330-257a-411d-9af3-ea14cd2279db, vol_name:cephfs) < ""
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0626d330-257a-411d-9af3-ea14cd2279db'' moved to trashcan
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0626d330-257a-411d-9af3-ea14cd2279db, vol_name:cephfs) < ""
Nov 29 05:35:18 compute-0 podman[262247]: 2025-11-29 05:35:18.040608933 +0000 UTC m=+0.081150889 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:35:18 compute-0 ceph-mon[75176]: mgrmap e12: compute-0.csskcz(active, since 26m)
Nov 29 05:35:18 compute-0 ceph-mon[75176]: pgmap v916: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 2 op/s
Nov 29 05:35:18 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0626d330-257a-411d-9af3-ea14cd2279db", "format": "json"}]: dispatch
Nov 29 05:35:18 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0626d330-257a-411d-9af3-ea14cd2279db", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta.tmp'
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta.tmp' to config b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta'
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.clone_index] untracking e238a17c-1d6c-47f9-bb57-1112ab9cdb2b
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta.tmp'
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta.tmp' to config b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta'
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta.tmp'
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta.tmp' to config b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401/.meta'
Nov 29 05:35:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 08ff9271-7e61-496e-a296-8bfe1694c401)
Nov 29 05:35:19 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "format": "json"}]: dispatch
Nov 29 05:35:19 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:08ff9271-7e61-496e-a296-8bfe1694c401, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "format": "json"}]: dispatch
Nov 29 05:35:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 30 KiB/s wr, 6 op/s
Nov 29 05:35:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:35:20 compute-0 ceph-mon[75176]: pgmap v917: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 30 KiB/s wr, 6 op/s
Nov 29 05:35:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 31 KiB/s wr, 6 op/s
Nov 29 05:35:22 compute-0 podman[262267]: 2025-11-29 05:35:22.027379473 +0000 UTC m=+0.083559722 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:35:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:08ff9271-7e61-496e-a296-8bfe1694c401, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:22 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "format": "json"}]: dispatch
Nov 29 05:35:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:08ff9271-7e61-496e-a296-8bfe1694c401, vol_name:cephfs) < ""
Nov 29 05:35:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:08ff9271-7e61-496e-a296-8bfe1694c401, vol_name:cephfs) < ""
Nov 29 05:35:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:35:22 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:23 compute-0 ceph-mon[75176]: pgmap v918: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 31 KiB/s wr, 6 op/s
Nov 29 05:35:23 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c22e2d86-7063-410a-82af-ad809d954609", "format": "json"}]: dispatch
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c22e2d86-7063-410a-82af-ad809d954609, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c22e2d86-7063-410a-82af-ad809d954609, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c22e2d86-7063-410a-82af-ad809d954609' of type subvolume
Nov 29 05:35:23 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:35:23.031+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c22e2d86-7063-410a-82af-ad809d954609' of type subvolume
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c22e2d86-7063-410a-82af-ad809d954609", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c22e2d86-7063-410a-82af-ad809d954609, vol_name:cephfs) < ""
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c22e2d86-7063-410a-82af-ad809d954609'' moved to trashcan
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c22e2d86-7063-410a-82af-ad809d954609, vol_name:cephfs) < ""
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "466ffbf3-2e61-457d-9a83-066aa5755061", "format": "json"}]: dispatch
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:466ffbf3-2e61-457d-9a83-066aa5755061, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:466ffbf3-2e61-457d-9a83-066aa5755061, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:23 compute-0 sshd-session[262294]: Invalid user tibero from 152.32.145.111 port 51794
Nov 29 05:35:23 compute-0 sshd-session[262294]: Received disconnect from 152.32.145.111 port 51794:11: Bye Bye [preauth]
Nov 29 05:35:23 compute-0 sshd-session[262294]: Disconnected from invalid user tibero 152.32.145.111 port 51794 [preauth]
Nov 29 05:35:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 31 KiB/s wr, 6 op/s
Nov 29 05:35:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "format": "json"}]: dispatch
Nov 29 05:35:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c22e2d86-7063-410a-82af-ad809d954609", "format": "json"}]: dispatch
Nov 29 05:35:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c22e2d86-7063-410a-82af-ad809d954609", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "466ffbf3-2e61-457d-9a83-066aa5755061", "format": "json"}]: dispatch
Nov 29 05:35:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72", "format": "json"}]: dispatch
Nov 29 05:35:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:25 compute-0 ceph-mon[75176]: pgmap v919: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 31 KiB/s wr, 6 op/s
Nov 29 05:35:25 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "a8afb0bd-caae-4e09-b7fc-59c77a02ac82", "format": "json"}]: dispatch
Nov 29 05:35:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a8afb0bd-caae-4e09-b7fc-59c77a02ac82, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a8afb0bd-caae-4e09-b7fc-59c77a02ac82, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:35:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 761 B/s rd, 29 KiB/s wr, 6 op/s
Nov 29 05:35:26 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72", "format": "json"}]: dispatch
Nov 29 05:35:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "a8afb0bd-caae-4e09-b7fc-59c77a02ac82_9bce4af4-714b-4d64-be4c-d1a28b6fa5f6", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8afb0bd-caae-4e09-b7fc-59c77a02ac82_9bce4af4-714b-4d64-be4c-d1a28b6fa5f6, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:35:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:35:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8afb0bd-caae-4e09-b7fc-59c77a02ac82_9bce4af4-714b-4d64-be4c-d1a28b6fa5f6, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "a8afb0bd-caae-4e09-b7fc-59c77a02ac82", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8afb0bd-caae-4e09-b7fc-59c77a02ac82, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:35:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:35:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a8afb0bd-caae-4e09-b7fc-59c77a02ac82, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:26 compute-0 nova_compute[254898]: 2025-11-29 05:35:26.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:35:27 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "a8afb0bd-caae-4e09-b7fc-59c77a02ac82", "format": "json"}]: dispatch
Nov 29 05:35:27 compute-0 ceph-mon[75176]: pgmap v920: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 761 B/s rd, 29 KiB/s wr, 6 op/s
Nov 29 05:35:27 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "a8afb0bd-caae-4e09-b7fc-59c77a02ac82_9bce4af4-714b-4d64-be4c-d1a28b6fa5f6", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:27 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "a8afb0bd-caae-4e09-b7fc-59c77a02ac82", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 26 KiB/s wr, 6 op/s
Nov 29 05:35:28 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:35:28.005 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:35:28 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:35:28.006 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 05:35:28 compute-0 nova_compute[254898]: 2025-11-29 05:35:28.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:35:28 compute-0 nova_compute[254898]: 2025-11-29 05:35:28.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:35:29 compute-0 ceph-mon[75176]: pgmap v921: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 26 KiB/s wr, 6 op/s
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "format": "json"}]: dispatch
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:08ff9271-7e61-496e-a296-8bfe1694c401, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:08ff9271-7e61-496e-a296-8bfe1694c401, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:08ff9271-7e61-496e-a296-8bfe1694c401, vol_name:cephfs) < ""
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/08ff9271-7e61-496e-a296-8bfe1694c401'' moved to trashcan
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:08ff9271-7e61-496e-a296-8bfe1694c401, vol_name:cephfs) < ""
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72_33f581df-12c0-4550-9aaa-8aaaa30382e3", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72_33f581df-12c0-4550-9aaa-8aaaa30382e3, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72_33f581df-12c0-4550-9aaa-8aaaa30382e3, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:35:29 compute-0 nova_compute[254898]: 2025-11-29 05:35:29.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 41 KiB/s wr, 8 op/s
Nov 29 05:35:29 compute-0 nova_compute[254898]: 2025-11-29 05:35:29.986 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:35:29 compute-0 nova_compute[254898]: 2025-11-29 05:35:29.987 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:35:29 compute-0 nova_compute[254898]: 2025-11-29 05:35:29.987 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:35:29 compute-0 nova_compute[254898]: 2025-11-29 05:35:29.987 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:35:29 compute-0 nova_compute[254898]: 2025-11-29 05:35:29.987 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:35:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:35:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:35:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244575321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:35:30 compute-0 nova_compute[254898]: 2025-11-29 05:35:30.438 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:35:30 compute-0 nova_compute[254898]: 2025-11-29 05:35:30.586 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:35:30 compute-0 nova_compute[254898]: 2025-11-29 05:35:30.587 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5161MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:35:30 compute-0 nova_compute[254898]: 2025-11-29 05:35:30.588 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:35:30 compute-0 nova_compute[254898]: 2025-11-29 05:35:30.588 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:35:30 compute-0 nova_compute[254898]: 2025-11-29 05:35:30.662 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:35:30 compute-0 nova_compute[254898]: 2025-11-29 05:35:30.663 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:35:30 compute-0 nova_compute[254898]: 2025-11-29 05:35:30.679 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:35:31 compute-0 podman[262338]: 2025-11-29 05:35:31.028003911 +0000 UTC m=+0.065380233 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 05:35:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:35:31 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1136413950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:35:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "format": "json"}]: dispatch
Nov 29 05:35:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "08ff9271-7e61-496e-a296-8bfe1694c401", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72_33f581df-12c0-4550-9aaa-8aaaa30382e3", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "cbba1f8e-b6b8-4e8f-8d36-18abb0c9ac72", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:31 compute-0 ceph-mon[75176]: pgmap v922: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 41 KiB/s wr, 8 op/s
Nov 29 05:35:31 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/4244575321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:35:31 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1136413950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:35:31 compute-0 nova_compute[254898]: 2025-11-29 05:35:31.074 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:35:31 compute-0 nova_compute[254898]: 2025-11-29 05:35:31.080 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:35:31 compute-0 nova_compute[254898]: 2025-11-29 05:35:31.099 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:35:31 compute-0 nova_compute[254898]: 2025-11-29 05:35:31.101 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:35:31 compute-0 nova_compute[254898]: 2025-11-29 05:35:31.102 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:35:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 23 KiB/s wr, 5 op/s
Nov 29 05:35:32 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:35:32.008 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 05:35:32 compute-0 nova_compute[254898]: 2025-11-29 05:35:32.103 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:35:32 compute-0 nova_compute[254898]: 2025-11-29 05:35:32.103 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:35:32 compute-0 nova_compute[254898]: 2025-11-29 05:35:32.104 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:35:32 compute-0 nova_compute[254898]: 2025-11-29 05:35:32.104 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:35:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "snap_name": "8cf4c283-b297-4eb5-902b-efa757c8775f_1d37acde-d071-4205-8f1d-460ac18d4e24", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8cf4c283-b297-4eb5-902b-efa757c8775f_1d37acde-d071-4205-8f1d-460ac18d4e24, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta.tmp'
Nov 29 05:35:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta.tmp' to config b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta'
Nov 29 05:35:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8cf4c283-b297-4eb5-902b-efa757c8775f_1d37acde-d071-4205-8f1d-460ac18d4e24, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "snap_name": "8cf4c283-b297-4eb5-902b-efa757c8775f", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8cf4c283-b297-4eb5-902b-efa757c8775f, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta.tmp'
Nov 29 05:35:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta.tmp' to config b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1/.meta'
Nov 29 05:35:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8cf4c283-b297-4eb5-902b-efa757c8775f, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 29 05:35:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 05:35:33 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 29 05:35:33 compute-0 ceph-mon[75176]: pgmap v923: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 23 KiB/s wr, 5 op/s
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "466ffbf3-2e61-457d-9a83-066aa5755061_99b797bd-23c6-46a2-8353-545c9d1d20ad", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:466ffbf3-2e61-457d-9a83-066aa5755061_99b797bd-23c6-46a2-8353-545c9d1d20ad, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:466ffbf3-2e61-457d-9a83-066aa5755061_99b797bd-23c6-46a2-8353-545c9d1d20ad, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "466ffbf3-2e61-457d-9a83-066aa5755061", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:466ffbf3-2e61-457d-9a83-066aa5755061, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:466ffbf3-2e61-457d-9a83-066aa5755061, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:33 compute-0 nova_compute[254898]: 2025-11-29 05:35:33.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:35:33 compute-0 nova_compute[254898]: 2025-11-29 05:35:33.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:35:33 compute-0 nova_compute[254898]: 2025-11-29 05:35:33.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:35:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 27 KiB/s wr, 6 op/s
Nov 29 05:35:34 compute-0 nova_compute[254898]: 2025-11-29 05:35:34.033 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:35:34 compute-0 nova_compute[254898]: 2025-11-29 05:35:34.033 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:35:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 29 05:35:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 05:35:34 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 29 05:35:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "snap_name": "8cf4c283-b297-4eb5-902b-efa757c8775f_1d37acde-d071-4205-8f1d-460ac18d4e24", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "snap_name": "8cf4c283-b297-4eb5-902b-efa757c8775f", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:34 compute-0 ceph-mon[75176]: osdmap e128: 3 total, 3 up, 3 in
Nov 29 05:35:34 compute-0 ceph-mon[75176]: osdmap e129: 3 total, 3 up, 3 in
Nov 29 05:35:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 29 05:35:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 05:35:35 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 29 05:35:35 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "466ffbf3-2e61-457d-9a83-066aa5755061_99b797bd-23c6-46a2-8353-545c9d1d20ad", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:35 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "466ffbf3-2e61-457d-9a83-066aa5755061", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:35 compute-0 ceph-mon[75176]: pgmap v925: 305 pgs: 305 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 27 KiB/s wr, 6 op/s
Nov 29 05:35:35 compute-0 ceph-mon[75176]: osdmap e130: 3 total, 3 up, 3 in
Nov 29 05:35:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:35:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 38 KiB/s wr, 7 op/s
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "format": "json"}]: dispatch
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cfa53e30-aa5d-48db-9775-60686b039ee1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cfa53e30-aa5d-48db-9775-60686b039ee1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:36 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:35:36.385+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cfa53e30-aa5d-48db-9775-60686b039ee1' of type subvolume
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cfa53e30-aa5d-48db-9775-60686b039ee1' of type subvolume
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cfa53e30-aa5d-48db-9775-60686b039ee1'' moved to trashcan
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cfa53e30-aa5d-48db-9775-60686b039ee1, vol_name:cephfs) < ""
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "5fbba3e6-4a9f-416a-b875-dbe87783ac9f_7af15242-6172-4c61-ab43-69a053d2a208", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5fbba3e6-4a9f-416a-b875-dbe87783ac9f_7af15242-6172-4c61-ab43-69a053d2a208, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5fbba3e6-4a9f-416a-b875-dbe87783ac9f_7af15242-6172-4c61-ab43-69a053d2a208, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "5fbba3e6-4a9f-416a-b875-dbe87783ac9f", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5fbba3e6-4a9f-416a-b875-dbe87783ac9f, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:35:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5fbba3e6-4a9f-416a-b875-dbe87783ac9f, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:37 compute-0 ceph-mon[75176]: pgmap v928: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 38 KiB/s wr, 7 op/s
Nov 29 05:35:37 compute-0 sudo[262359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:35:37 compute-0 sudo[262359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:37 compute-0 sudo[262359]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:37 compute-0 sudo[262384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:35:37 compute-0 sudo[262384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:37 compute-0 sudo[262384]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:37 compute-0 sudo[262409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:35:37 compute-0 sudo[262409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:37 compute-0 sudo[262409]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:37 compute-0 sudo[262434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:35:37 compute-0 sudo[262434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:37 compute-0 sudo[262434]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:35:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:35:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:35:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:35:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:35:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:35:37 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 329d6dc2-3a5e-4918-b654-2fe81c2938a3 does not exist
Nov 29 05:35:37 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev cd50bc0b-3cbb-4e41-a2ee-330925ce638b does not exist
Nov 29 05:35:37 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev f1f143b1-432a-461f-8cb8-31a1618de53c does not exist
Nov 29 05:35:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:35:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:35:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:35:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:35:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:35:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:35:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 38 KiB/s wr, 7 op/s
Nov 29 05:35:37 compute-0 sudo[262491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:35:37 compute-0 sudo[262491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:37 compute-0 sudo[262491]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:38 compute-0 sudo[262516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:35:38 compute-0 sudo[262516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:38 compute-0 sudo[262516]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 29 05:35:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "format": "json"}]: dispatch
Nov 29 05:35:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cfa53e30-aa5d-48db-9775-60686b039ee1", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "5fbba3e6-4a9f-416a-b875-dbe87783ac9f_7af15242-6172-4c61-ab43-69a053d2a208", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "5fbba3e6-4a9f-416a-b875-dbe87783ac9f", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:35:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:35:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:35:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:35:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:35:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:35:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 05:35:38 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 29 05:35:38 compute-0 sudo[262541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:35:38 compute-0 sudo[262541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:38 compute-0 sudo[262541]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:38 compute-0 sudo[262566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:35:38 compute-0 sudo[262566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:38 compute-0 podman[262632]: 2025-11-29 05:35:38.495117465 +0000 UTC m=+0.036345980 container create 5120da209b89ba8c399703f298b7dc403648abc5edd0af0f848161431822bf56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:35:38 compute-0 systemd[1]: Started libpod-conmon-5120da209b89ba8c399703f298b7dc403648abc5edd0af0f848161431822bf56.scope.
Nov 29 05:35:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:35:38 compute-0 podman[262632]: 2025-11-29 05:35:38.479228871 +0000 UTC m=+0.020457416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:35:38 compute-0 podman[262632]: 2025-11-29 05:35:38.579129327 +0000 UTC m=+0.120357882 container init 5120da209b89ba8c399703f298b7dc403648abc5edd0af0f848161431822bf56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:35:38 compute-0 podman[262632]: 2025-11-29 05:35:38.586901576 +0000 UTC m=+0.128130081 container start 5120da209b89ba8c399703f298b7dc403648abc5edd0af0f848161431822bf56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:35:38 compute-0 podman[262632]: 2025-11-29 05:35:38.590756319 +0000 UTC m=+0.131984874 container attach 5120da209b89ba8c399703f298b7dc403648abc5edd0af0f848161431822bf56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:35:38 compute-0 gifted_keller[262648]: 167 167
Nov 29 05:35:38 compute-0 systemd[1]: libpod-5120da209b89ba8c399703f298b7dc403648abc5edd0af0f848161431822bf56.scope: Deactivated successfully.
Nov 29 05:35:38 compute-0 conmon[262648]: conmon 5120da209b89ba8c3997 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5120da209b89ba8c399703f298b7dc403648abc5edd0af0f848161431822bf56.scope/container/memory.events
Nov 29 05:35:38 compute-0 podman[262632]: 2025-11-29 05:35:38.593402463 +0000 UTC m=+0.134631008 container died 5120da209b89ba8c399703f298b7dc403648abc5edd0af0f848161431822bf56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:35:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-73904d3503ae104c443bf9edb83f1938e2f742be3d11d044c8a3015e7d25d17f-merged.mount: Deactivated successfully.
Nov 29 05:35:38 compute-0 podman[262632]: 2025-11-29 05:35:38.636358832 +0000 UTC m=+0.177587387 container remove 5120da209b89ba8c399703f298b7dc403648abc5edd0af0f848161431822bf56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:35:38 compute-0 systemd[1]: libpod-conmon-5120da209b89ba8c399703f298b7dc403648abc5edd0af0f848161431822bf56.scope: Deactivated successfully.
Nov 29 05:35:38 compute-0 podman[262672]: 2025-11-29 05:35:38.822915554 +0000 UTC m=+0.042815076 container create b8e465d873e3dba624a3651980952516eb5bcb9caa9ab623cc791d80497f7737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:35:38 compute-0 systemd[1]: Started libpod-conmon-b8e465d873e3dba624a3651980952516eb5bcb9caa9ab623cc791d80497f7737.scope.
Nov 29 05:35:38 compute-0 podman[262672]: 2025-11-29 05:35:38.803010983 +0000 UTC m=+0.022910475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:35:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dd177a43e2ca99c5d8045c71ac32f6dabeb79a7ff0f88f9448cab98ab18248/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dd177a43e2ca99c5d8045c71ac32f6dabeb79a7ff0f88f9448cab98ab18248/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dd177a43e2ca99c5d8045c71ac32f6dabeb79a7ff0f88f9448cab98ab18248/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dd177a43e2ca99c5d8045c71ac32f6dabeb79a7ff0f88f9448cab98ab18248/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dd177a43e2ca99c5d8045c71ac32f6dabeb79a7ff0f88f9448cab98ab18248/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:38 compute-0 podman[262672]: 2025-11-29 05:35:38.92112546 +0000 UTC m=+0.141024942 container init b8e465d873e3dba624a3651980952516eb5bcb9caa9ab623cc791d80497f7737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:35:38 compute-0 podman[262672]: 2025-11-29 05:35:38.932041994 +0000 UTC m=+0.151941486 container start b8e465d873e3dba624a3651980952516eb5bcb9caa9ab623cc791d80497f7737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:35:38 compute-0 podman[262672]: 2025-11-29 05:35:38.936274226 +0000 UTC m=+0.156173728 container attach b8e465d873e3dba624a3651980952516eb5bcb9caa9ab623cc791d80497f7737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:35:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 29 05:35:39 compute-0 ceph-mon[75176]: pgmap v929: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 42 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 38 KiB/s wr, 7 op/s
Nov 29 05:35:39 compute-0 ceph-mon[75176]: osdmap e131: 3 total, 3 up, 3 in
Nov 29 05:35:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 05:35:39 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 29 05:35:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 68 KiB/s wr, 12 op/s
Nov 29 05:35:39 compute-0 jovial_ganguly[262689]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:35:39 compute-0 jovial_ganguly[262689]: --> relative data size: 1.0
Nov 29 05:35:39 compute-0 jovial_ganguly[262689]: --> All data devices are unavailable
Nov 29 05:35:40 compute-0 systemd[1]: libpod-b8e465d873e3dba624a3651980952516eb5bcb9caa9ab623cc791d80497f7737.scope: Deactivated successfully.
Nov 29 05:35:40 compute-0 podman[262672]: 2025-11-29 05:35:40.012194941 +0000 UTC m=+1.232094433 container died b8e465d873e3dba624a3651980952516eb5bcb9caa9ab623cc791d80497f7737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:35:40 compute-0 systemd[1]: libpod-b8e465d873e3dba624a3651980952516eb5bcb9caa9ab623cc791d80497f7737.scope: Consumed 1.011s CPU time.
Nov 29 05:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-65dd177a43e2ca99c5d8045c71ac32f6dabeb79a7ff0f88f9448cab98ab18248-merged.mount: Deactivated successfully.
Nov 29 05:35:40 compute-0 podman[262672]: 2025-11-29 05:35:40.078009843 +0000 UTC m=+1.297909315 container remove b8e465d873e3dba624a3651980952516eb5bcb9caa9ab623cc791d80497f7737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:35:40 compute-0 systemd[1]: libpod-conmon-b8e465d873e3dba624a3651980952516eb5bcb9caa9ab623cc791d80497f7737.scope: Deactivated successfully.
Nov 29 05:35:40 compute-0 sudo[262566]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:40 compute-0 ceph-mon[75176]: osdmap e132: 3 total, 3 up, 3 in
Nov 29 05:35:40 compute-0 sudo[262729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:35:40 compute-0 sudo[262729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:40 compute-0 sudo[262729]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:35:40 compute-0 sudo[262754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:35:40 compute-0 sudo[262754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:40 compute-0 sudo[262754]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:40 compute-0 sudo[262779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:35:40 compute-0 sudo[262779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:40 compute-0 sudo[262779]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:40 compute-0 sudo[262804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:35:40 compute-0 sudo[262804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "48601f02-9051-4603-a049-8748d3e87534", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48601f02-9051-4603-a049-8748d3e87534, vol_name:cephfs) < ""
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/48601f02-9051-4603-a049-8748d3e87534/.meta.tmp'
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/48601f02-9051-4603-a049-8748d3e87534/.meta.tmp' to config b'/volumes/_nogroup/48601f02-9051-4603-a049-8748d3e87534/.meta'
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:48601f02-9051-4603-a049-8748d3e87534, vol_name:cephfs) < ""
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "48601f02-9051-4603-a049-8748d3e87534", "format": "json"}]: dispatch
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48601f02-9051-4603-a049-8748d3e87534, vol_name:cephfs) < ""
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:48601f02-9051-4603-a049-8748d3e87534, vol_name:cephfs) < ""
Nov 29 05:35:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:35:40 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "aac92040-d3b9-4ca6-8113-38a011a7589d_c98a0dcd-284d-49d3-9ea2-a80ce4ec6e6e", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aac92040-d3b9-4ca6-8113-38a011a7589d_c98a0dcd-284d-49d3-9ea2-a80ce4ec6e6e, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aac92040-d3b9-4ca6-8113-38a011a7589d_c98a0dcd-284d-49d3-9ea2-a80ce4ec6e6e, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "aac92040-d3b9-4ca6-8113-38a011a7589d", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aac92040-d3b9-4ca6-8113-38a011a7589d, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:40 compute-0 podman[262868]: 2025-11-29 05:35:40.714799526 +0000 UTC m=+0.045577034 container create 3b75848cac1520c2e3228b52ad256315155abf2d216e8e11274fc156ba35ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp'
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta.tmp' to config b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031/.meta'
Nov 29 05:35:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aac92040-d3b9-4ca6-8113-38a011a7589d, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:40 compute-0 systemd[1]: Started libpod-conmon-3b75848cac1520c2e3228b52ad256315155abf2d216e8e11274fc156ba35ea5c.scope.
Nov 29 05:35:40 compute-0 podman[262868]: 2025-11-29 05:35:40.692822174 +0000 UTC m=+0.023599682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:35:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:35:40 compute-0 podman[262868]: 2025-11-29 05:35:40.823000683 +0000 UTC m=+0.153778181 container init 3b75848cac1520c2e3228b52ad256315155abf2d216e8e11274fc156ba35ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_benz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:35:40 compute-0 podman[262868]: 2025-11-29 05:35:40.829251954 +0000 UTC m=+0.160029452 container start 3b75848cac1520c2e3228b52ad256315155abf2d216e8e11274fc156ba35ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_benz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:35:40 compute-0 podman[262868]: 2025-11-29 05:35:40.832828861 +0000 UTC m=+0.163606339 container attach 3b75848cac1520c2e3228b52ad256315155abf2d216e8e11274fc156ba35ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_benz, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 05:35:40 compute-0 systemd[1]: libpod-3b75848cac1520c2e3228b52ad256315155abf2d216e8e11274fc156ba35ea5c.scope: Deactivated successfully.
Nov 29 05:35:40 compute-0 great_benz[262884]: 167 167
Nov 29 05:35:40 compute-0 conmon[262884]: conmon 3b75848cac1520c2e322 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b75848cac1520c2e3228b52ad256315155abf2d216e8e11274fc156ba35ea5c.scope/container/memory.events
Nov 29 05:35:40 compute-0 podman[262868]: 2025-11-29 05:35:40.835920705 +0000 UTC m=+0.166698193 container died 3b75848cac1520c2e3228b52ad256315155abf2d216e8e11274fc156ba35ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:35:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-63e22a13b7eff65e1a8f299f2dbb55cff01cd7873cdcafb4dd04de236f71769e-merged.mount: Deactivated successfully.
Nov 29 05:35:40 compute-0 podman[262868]: 2025-11-29 05:35:40.864450385 +0000 UTC m=+0.195227853 container remove 3b75848cac1520c2e3228b52ad256315155abf2d216e8e11274fc156ba35ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 05:35:40 compute-0 systemd[1]: libpod-conmon-3b75848cac1520c2e3228b52ad256315155abf2d216e8e11274fc156ba35ea5c.scope: Deactivated successfully.
Nov 29 05:35:41 compute-0 podman[262908]: 2025-11-29 05:35:41.017456747 +0000 UTC m=+0.040028030 container create 24ace2c8111cf2214e51be92b8ee686fd16b74103cb9be534128ac7413d9bafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:35:41 compute-0 systemd[1]: Started libpod-conmon-24ace2c8111cf2214e51be92b8ee686fd16b74103cb9be534128ac7413d9bafd.scope.
Nov 29 05:35:41 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288cb133c1ed145613e9db5d4503f17a4114bfdd8340ecb1d7dd41d2bfbc2915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288cb133c1ed145613e9db5d4503f17a4114bfdd8340ecb1d7dd41d2bfbc2915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288cb133c1ed145613e9db5d4503f17a4114bfdd8340ecb1d7dd41d2bfbc2915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/288cb133c1ed145613e9db5d4503f17a4114bfdd8340ecb1d7dd41d2bfbc2915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:41 compute-0 podman[262908]: 2025-11-29 05:35:41.001342376 +0000 UTC m=+0.023913689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:35:41 compute-0 podman[262908]: 2025-11-29 05:35:41.10607564 +0000 UTC m=+0.128646973 container init 24ace2c8111cf2214e51be92b8ee686fd16b74103cb9be534128ac7413d9bafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:35:41 compute-0 podman[262908]: 2025-11-29 05:35:41.112773322 +0000 UTC m=+0.135344615 container start 24ace2c8111cf2214e51be92b8ee686fd16b74103cb9be534128ac7413d9bafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 29 05:35:41 compute-0 podman[262908]: 2025-11-29 05:35:41.116896562 +0000 UTC m=+0.139467855 container attach 24ace2c8111cf2214e51be92b8ee686fd16b74103cb9be534128ac7413d9bafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:35:41 compute-0 ceph-mon[75176]: pgmap v932: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 68 KiB/s wr, 12 op/s
Nov 29 05:35:41 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:35:41
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'backups', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images']
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
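
The doubled load_schedules lines are expected: the rbd_support module runs two handlers, TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler, and each walks the RBD pools and reloads its own schedule list, so vms, volumes, backups and images each appear once per handler. The same schedules can be inspected from the CLI; a sketch assuming the stock rbd subcommands:

    # Inspect the schedules the two mgr handlers reload above.
    # Pool names are from the log; subcommand spellings are assumed.
    import subprocess
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=False)
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=False)
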
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]: {
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:     "0": [
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:         {
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "devices": [
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "/dev/loop3"
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             ],
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_name": "ceph_lv0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_size": "21470642176",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "name": "ceph_lv0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "tags": {
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.cluster_name": "ceph",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.crush_device_class": "",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.encrypted": "0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.osd_id": "0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.type": "block",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.vdo": "0"
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             },
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "type": "block",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "vg_name": "ceph_vg0"
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:         }
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:     ],
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:     "1": [
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:         {
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "devices": [
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "/dev/loop4"
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             ],
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_name": "ceph_lv1",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_size": "21470642176",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "name": "ceph_lv1",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "tags": {
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.cluster_name": "ceph",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.crush_device_class": "",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.encrypted": "0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.osd_id": "1",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.type": "block",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.vdo": "0"
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             },
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "type": "block",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "vg_name": "ceph_vg1"
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:         }
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:     ],
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:     "2": [
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:         {
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "devices": [
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "/dev/loop5"
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             ],
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_name": "ceph_lv2",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_size": "21470642176",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "name": "ceph_lv2",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "tags": {
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.cluster_name": "ceph",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.crush_device_class": "",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.encrypted": "0",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.osd_id": "2",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.type": "block",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:                 "ceph.vdo": "0"
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             },
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "type": "block",
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:             "vg_name": "ceph_vg2"
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:         }
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]:     ]
Nov 29 05:35:41 compute-0 stupefied_torvalds[262924]: }
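
The JSON the stupefied_torvalds container just printed is ceph-volume lvm list --format json output: a map of OSD id to a list of logical-volume records, with the same metadata carried both as the flat lv_tags string and as the parsed tags object. A minimal consumer, assuming the same invocation on an OSD host:

    # Parse `ceph-volume lvm list --format json` (the document printed
    # above) into an osd_id -> device mapping.
    import json, subprocess
    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
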
Nov 29 05:35:41 compute-0 systemd[1]: libpod-24ace2c8111cf2214e51be92b8ee686fd16b74103cb9be534128ac7413d9bafd.scope: Deactivated successfully.
Nov 29 05:35:41 compute-0 podman[262933]: 2025-11-29 05:35:41.900137267 +0000 UTC m=+0.021583013 container died 24ace2c8111cf2214e51be92b8ee686fd16b74103cb9be534128ac7413d9bafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 05:35:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-288cb133c1ed145613e9db5d4503f17a4114bfdd8340ecb1d7dd41d2bfbc2915-merged.mount: Deactivated successfully.
Nov 29 05:35:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 891 B/s rd, 26 KiB/s wr, 5 op/s
Nov 29 05:35:41 compute-0 podman[262933]: 2025-11-29 05:35:41.983612716 +0000 UTC m=+0.105058442 container remove 24ace2c8111cf2214e51be92b8ee686fd16b74103cb9be534128ac7413d9bafd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_torvalds, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 05:35:41 compute-0 systemd[1]: libpod-conmon-24ace2c8111cf2214e51be92b8ee686fd16b74103cb9be534128ac7413d9bafd.scope: Deactivated successfully.
Nov 29 05:35:42 compute-0 sudo[262804]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:42 compute-0 sudo[262948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:35:42 compute-0 sudo[262948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:42 compute-0 sudo[262948]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:42 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "48601f02-9051-4603-a049-8748d3e87534", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:42 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "48601f02-9051-4603-a049-8748d3e87534", "format": "json"}]: dispatch
Nov 29 05:35:42 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "aac92040-d3b9-4ca6-8113-38a011a7589d_c98a0dcd-284d-49d3-9ea2-a80ce4ec6e6e", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:42 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "snap_name": "aac92040-d3b9-4ca6-8113-38a011a7589d", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:42 compute-0 sudo[262973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:35:42 compute-0 sudo[262973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:42 compute-0 sudo[262973]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:42 compute-0 sudo[262998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:35:42 compute-0 sudo[262998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:42 compute-0 sudo[262998]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:42 compute-0 sudo[263023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:35:42 compute-0 sudo[263023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:42 compute-0 podman[263087]: 2025-11-29 05:35:42.630849081 +0000 UTC m=+0.039221309 container create 28030d1989e5edc123e498a4513d4c82604fde90e1d9b2d80c2cfe1dcd2acb5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:35:42 compute-0 systemd[1]: Started libpod-conmon-28030d1989e5edc123e498a4513d4c82604fde90e1d9b2d80c2cfe1dcd2acb5e.scope.
Nov 29 05:35:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:35:42 compute-0 podman[263087]: 2025-11-29 05:35:42.704679287 +0000 UTC m=+0.113051525 container init 28030d1989e5edc123e498a4513d4c82604fde90e1d9b2d80c2cfe1dcd2acb5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 05:35:42 compute-0 podman[263087]: 2025-11-29 05:35:42.61259908 +0000 UTC m=+0.020971318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:35:42 compute-0 podman[263087]: 2025-11-29 05:35:42.710288843 +0000 UTC m=+0.118661061 container start 28030d1989e5edc123e498a4513d4c82604fde90e1d9b2d80c2cfe1dcd2acb5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:35:42 compute-0 podman[263087]: 2025-11-29 05:35:42.712833635 +0000 UTC m=+0.121205863 container attach 28030d1989e5edc123e498a4513d4c82604fde90e1d9b2d80c2cfe1dcd2acb5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:35:42 compute-0 romantic_sanderson[263103]: 167 167
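
The lone "167 167" from the short-lived romantic_sanderson container is presumably cephadm probing the uid and gid of the ceph account inside the image (ceph:ceph is 167:167 on CentOS-based Ceph images) before writing host files. Roughly what such a probe looks like; the entrypoint and path are assumptions, not taken from the log:

    # Hypothetical reconstruction of the uid/gid probe (image digest
    # shortened here; the full digest appears in the log lines above).
    import subprocess
    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce..."
    print(subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"], text=True))
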
Nov 29 05:35:42 compute-0 systemd[1]: libpod-28030d1989e5edc123e498a4513d4c82604fde90e1d9b2d80c2cfe1dcd2acb5e.scope: Deactivated successfully.
Nov 29 05:35:42 compute-0 podman[263087]: 2025-11-29 05:35:42.715017127 +0000 UTC m=+0.123389335 container died 28030d1989e5edc123e498a4513d4c82604fde90e1d9b2d80c2cfe1dcd2acb5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:35:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-72e27935fc965bd447b76b7e322667aad6003e68877b86f5532faf8b91e67708-merged.mount: Deactivated successfully.
Nov 29 05:35:42 compute-0 podman[263087]: 2025-11-29 05:35:42.743309931 +0000 UTC m=+0.151682169 container remove 28030d1989e5edc123e498a4513d4c82604fde90e1d9b2d80c2cfe1dcd2acb5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_sanderson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:35:42 compute-0 systemd[1]: libpod-conmon-28030d1989e5edc123e498a4513d4c82604fde90e1d9b2d80c2cfe1dcd2acb5e.scope: Deactivated successfully.
Nov 29 05:35:42 compute-0 podman[263127]: 2025-11-29 05:35:42.927235551 +0000 UTC m=+0.058554958 container create bff80449f8b78d02e448f518ae8da99bcb8e59595ace346a1641e1a60df927f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 29 05:35:42 compute-0 systemd[1]: Started libpod-conmon-bff80449f8b78d02e448f518ae8da99bcb8e59595ace346a1641e1a60df927f0.scope.
Nov 29 05:35:42 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:35:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ccde109962c195ed1496ea51edc0f5a1b563a0088622194bdb9ed01e2d2d4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ccde109962c195ed1496ea51edc0f5a1b563a0088622194bdb9ed01e2d2d4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ccde109962c195ed1496ea51edc0f5a1b563a0088622194bdb9ed01e2d2d4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ccde109962c195ed1496ea51edc0f5a1b563a0088622194bdb9ed01e2d2d4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:35:42 compute-0 podman[263127]: 2025-11-29 05:35:42.99916408 +0000 UTC m=+0.130483477 container init bff80449f8b78d02e448f518ae8da99bcb8e59595ace346a1641e1a60df927f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:35:43 compute-0 podman[263127]: 2025-11-29 05:35:42.906579101 +0000 UTC m=+0.037898518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:35:43 compute-0 podman[263127]: 2025-11-29 05:35:43.007121063 +0000 UTC m=+0.138440480 container start bff80449f8b78d02e448f518ae8da99bcb8e59595ace346a1641e1a60df927f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:35:43 compute-0 podman[263127]: 2025-11-29 05:35:43.012874622 +0000 UTC m=+0.144194019 container attach bff80449f8b78d02e448f518ae8da99bcb8e59595ace346a1641e1a60df927f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:35:43 compute-0 ceph-mon[75176]: pgmap v933: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 891 B/s rd, 26 KiB/s wr, 5 op/s
Nov 29 05:35:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 4 op/s
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8f9a66be-c929-4519-9b84-590a248b55ea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8f9a66be-c929-4519-9b84-590a248b55ea, vol_name:cephfs) < ""
Nov 29 05:35:44 compute-0 youthful_almeida[263145]: {
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "osd_id": 0,
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "type": "bluestore"
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:     },
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "osd_id": 1,
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "type": "bluestore"
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:     },
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "osd_id": 2,
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:         "type": "bluestore"
Nov 29 05:35:44 compute-0 youthful_almeida[263145]:     }
Nov 29 05:35:44 compute-0 youthful_almeida[263145]: }
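
This second document is ceph-volume raw list --format json, the command visible in the sudo line above: keyed by osd_uuid rather than OSD id, and reporting the device-mapper path and bluestore type per OSD. The raw and lvm listings meet on the OSD fsid, which makes a consistency check straightforward; a sketch assuming both JSON documents were saved to the hypothetical files named below:

    # Cross-check `raw list` (keyed by osd_uuid) against `lvm list`
    # (keyed by osd_id) via ceph.osd_fsid. File names are hypothetical.
    import json
    raw = json.load(open("raw_list.json"))
    lvm = json.load(open("lvm_list.json"))
    fsid_to_osd = {lv["tags"]["ceph.osd_fsid"]: osd_id
                   for osd_id, lvs in lvm.items() for lv in lvs}
    for uuid, rec in raw.items():
        assert str(rec["osd_id"]) == fsid_to_osd[uuid]
        print(rec["osd_id"], rec["device"], rec["type"])
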
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8f9a66be-c929-4519-9b84-590a248b55ea/.meta.tmp'
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8f9a66be-c929-4519-9b84-590a248b55ea/.meta.tmp' to config b'/volumes/_nogroup/8f9a66be-c929-4519-9b84-590a248b55ea/.meta'
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8f9a66be-c929-4519-9b84-590a248b55ea, vol_name:cephfs) < ""
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8f9a66be-c929-4519-9b84-590a248b55ea", "format": "json"}]: dispatch
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8f9a66be-c929-4519-9b84-590a248b55ea, vol_name:cephfs) < ""
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8f9a66be-c929-4519-9b84-590a248b55ea, vol_name:cephfs) < ""
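
The two audit/volumes pairs above are the server side of a create-then-getpath sequence driven by client.openstack. The equivalent client calls with the stock CLI, arguments lifted from the logged command JSON (flag spellings assumed):

    # Client-side equivalent of the mgr audit lines above.
    import subprocess
    sub = "8f9a66be-c929-4519-9b84-590a248b55ea"
    subprocess.run(["ceph", "fs", "subvolume", "create", "cephfs", sub,
                    "--size", "1073741824", "--namespace-isolated",
                    "--mode", "0755"], check=True)
    path = subprocess.check_output(
        ["ceph", "fs", "subvolume", "getpath", "cephfs", sub], text=True)
    print(path.strip())  # e.g. /volumes/_nogroup/<sub_name>/<uuid>
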
Nov 29 05:35:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:35:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:44 compute-0 systemd[1]: libpod-bff80449f8b78d02e448f518ae8da99bcb8e59595ace346a1641e1a60df927f0.scope: Deactivated successfully.
Nov 29 05:35:44 compute-0 podman[263127]: 2025-11-29 05:35:44.153879611 +0000 UTC m=+1.285198998 container died bff80449f8b78d02e448f518ae8da99bcb8e59595ace346a1641e1a60df927f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:35:44 compute-0 systemd[1]: libpod-bff80449f8b78d02e448f518ae8da99bcb8e59595ace346a1641e1a60df927f0.scope: Consumed 1.139s CPU time.
Nov 29 05:35:44 compute-0 ceph-mon[75176]: pgmap v934: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 22 KiB/s wr, 4 op/s
Nov 29 05:35:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8f9a66be-c929-4519-9b84-590a248b55ea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8f9a66be-c929-4519-9b84-590a248b55ea", "format": "json"}]: dispatch
Nov 29 05:35:44 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3ccde109962c195ed1496ea51edc0f5a1b563a0088622194bdb9ed01e2d2d4d-merged.mount: Deactivated successfully.
Nov 29 05:35:44 compute-0 podman[263127]: 2025-11-29 05:35:44.219653972 +0000 UTC m=+1.350973359 container remove bff80449f8b78d02e448f518ae8da99bcb8e59595ace346a1641e1a60df927f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:35:44 compute-0 systemd[1]: libpod-conmon-bff80449f8b78d02e448f518ae8da99bcb8e59595ace346a1641e1a60df927f0.scope: Deactivated successfully.
Nov 29 05:35:44 compute-0 sudo[263023]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:35:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:35:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:35:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 5b373fc2-7764-44ff-b984-c4208fde5194 does not exist
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 0f66a80a-9bcf-4657-a03b-d09051015eac does not exist
Nov 29 05:35:44 compute-0 sudo[263191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:35:44 compute-0 sudo[263191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:44 compute-0 sudo[263191]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "format": "json"}]: dispatch
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e887b8f7-1920-4aa9-a22b-586da6843031, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e887b8f7-1920-4aa9-a22b-586da6843031, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e887b8f7-1920-4aa9-a22b-586da6843031' of type subvolume
Nov 29 05:35:44 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:35:44.392+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e887b8f7-1920-4aa9-a22b-586da6843031' of type subvolume
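
The errno 95 here is benign: fs clone status is only defined for subvolumes of type "clone", and the subvolume named in the command was created directly rather than cloned, so the volumes module refuses with EOPNOTSUPP. A caller can take that as "not a clone" before moving on to the removal that follows. One way to wrap that, with the reply's JSON shape assumed:

    # Treat EOPNOTSUPP (95) from `ceph fs clone status` as "not a clone".
    import json, subprocess
    def clone_state(vol, name):
        res = subprocess.run(
            ["ceph", "fs", "clone", "status", vol, name, "--format", "json"],
            capture_output=True, text=True)
        if res.returncode != 0:
            return None  # plain subvolume: mgr answers errno 95, as logged
        return json.loads(res.stdout)["status"]["state"]
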
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e887b8f7-1920-4aa9-a22b-586da6843031'' moved to trashcan
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:35:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e887b8f7-1920-4aa9-a22b-586da6843031, vol_name:cephfs) < ""
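
Note the removal is asynchronous: the mgr renames the subvolume into a trash directory ("moved to trashcan") and queues a purge job for the volume, so the command returns before the data is actually reclaimed. The logged removal, replayed with the stock CLI:

    # Replay of the logged removal; actual deletion happens later via
    # the queued async purge job noted above.
    import subprocess
    subprocess.run(["ceph", "fs", "subvolume", "rm", "cephfs",
                    "e887b8f7-1920-4aa9-a22b-586da6843031", "--force"],
                   check=True)
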
Nov 29 05:35:44 compute-0 sudo[263216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:35:44 compute-0 sudo[263216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:35:44 compute-0 sudo[263216]: pam_unix(sudo:session): session closed for user root
Nov 29 05:35:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 05:35:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 29 05:35:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 05:35:45 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 29 05:35:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:35:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:35:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "format": "json"}]: dispatch
Nov 29 05:35:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e887b8f7-1920-4aa9-a22b-586da6843031", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:45 compute-0 ceph-mon[75176]: osdmap e133: 3 total, 3 up, 3 in
Nov 29 05:35:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 45 KiB/s wr, 9 op/s
Nov 29 05:35:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 29 05:35:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 05:35:46 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 29 05:35:46 compute-0 ceph-mon[75176]: pgmap v936: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 45 KiB/s wr, 9 op/s
Nov 29 05:35:46 compute-0 ceph-mon[75176]: osdmap e134: 3 total, 3 up, 3 in
Nov 29 05:35:47 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 22 KiB/s wr, 5 op/s
Nov 29 05:35:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8f9a66be-c929-4519-9b84-590a248b55ea", "format": "json"}]: dispatch
Nov 29 05:35:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8f9a66be-c929-4519-9b84-590a248b55ea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8f9a66be-c929-4519-9b84-590a248b55ea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:48 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8f9a66be-c929-4519-9b84-590a248b55ea' of type subvolume
Nov 29 05:35:48 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:35:48.997+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8f9a66be-c929-4519-9b84-590a248b55ea' of type subvolume
Nov 29 05:35:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8f9a66be-c929-4519-9b84-590a248b55ea", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8f9a66be-c929-4519-9b84-590a248b55ea, vol_name:cephfs) < ""
Nov 29 05:35:49 compute-0 podman[263242]: 2025-11-29 05:35:49.002313246 +0000 UTC m=+0.057963573 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
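
The container health_status events (multipathd here, ovn_controller below) are podman's healthcheck timer running the test configured in the embedded config_data, /openstack/healthcheck, inside the container. The same check can be triggered by hand; exit status 0 corresponds to health_status=healthy:

    # Manual equivalent of the periodic healthcheck event above.
    import subprocess
    subprocess.run(["podman", "healthcheck", "run", "multipathd"], check=True)
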
Nov 29 05:35:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8f9a66be-c929-4519-9b84-590a248b55ea'' moved to trashcan
Nov 29 05:35:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:35:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8f9a66be-c929-4519-9b84-590a248b55ea, vol_name:cephfs) < ""
Nov 29 05:35:49 compute-0 ceph-mon[75176]: pgmap v938: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 22 KiB/s wr, 5 op/s
Nov 29 05:35:49 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 46 KiB/s wr, 6 op/s
Nov 29 05:35:50 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8f9a66be-c929-4519-9b84-590a248b55ea", "format": "json"}]: dispatch
Nov 29 05:35:50 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8f9a66be-c929-4519-9b84-590a248b55ea", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:35:51 compute-0 ceph-mon[75176]: pgmap v939: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 46 KiB/s wr, 6 op/s
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.613578091424106e-05 of space, bias 4.0, pg target 0.031362937097089275 quantized to 16 (current 16)
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
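
Every pool line above follows one formula: pg target = capacity ratio × bias × the cluster's PG budget, with the result then quantized to a power of two subject to the pool's floor. The budget itself is not logged, but 300 (presumably mon_target_pg_per_osd=100 across the 3 OSDs) reproduces the numbers:

    # Reproduce two of the pg_autoscaler lines above; the 300-PG budget
    # (100 per OSD x 3 OSDs) is inferred from the output, not logged.
    budget = 100 * 3
    print(0.000665858301588852 * 1.0 * budget)   # 'images': ~0.1998, quantized to 32
    print(2.613578091424106e-05 * 4.0 * budget)  # 'cephfs.cephfs.meta': ~0.0314, to 16
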
Nov 29 05:35:51 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 46 KiB/s wr, 6 op/s
Nov 29 05:35:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0e332c7f-e0d3-46ad-9a13-7cf0840fc484", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, vol_name:cephfs) < ""
Nov 29 05:35:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0e332c7f-e0d3-46ad-9a13-7cf0840fc484/.meta.tmp'
Nov 29 05:35:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0e332c7f-e0d3-46ad-9a13-7cf0840fc484/.meta.tmp' to config b'/volumes/_nogroup/0e332c7f-e0d3-46ad-9a13-7cf0840fc484/.meta'
Nov 29 05:35:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, vol_name:cephfs) < ""
Nov 29 05:35:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0e332c7f-e0d3-46ad-9a13-7cf0840fc484", "format": "json"}]: dispatch
Nov 29 05:35:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, vol_name:cephfs) < ""
Nov 29 05:35:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, vol_name:cephfs) < ""
Nov 29 05:35:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:35:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:53 compute-0 podman[263264]: 2025-11-29 05:35:53.055988587 +0000 UTC m=+0.108641140 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:35:53 compute-0 ceph-mon[75176]: pgmap v940: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 46 KiB/s wr, 6 op/s
Nov 29 05:35:53 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:53 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 233 B/s rd, 22 KiB/s wr, 2 op/s
Nov 29 05:35:54 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0e332c7f-e0d3-46ad-9a13-7cf0840fc484", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:54 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0e332c7f-e0d3-46ad-9a13-7cf0840fc484", "format": "json"}]: dispatch
Nov 29 05:35:54 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:54 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:35:54 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta.tmp'
Nov 29 05:35:54 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta.tmp' to config b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta'
Nov 29 05:35:54 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:35:54 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "format": "json"}]: dispatch
Nov 29 05:35:54 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:35:54 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:35:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:35:54 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "snap_name": "3ce803d6-55ed-4b2e-b9f1-fdf345652692", "format": "json"}]: dispatch
Nov 29 05:35:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3ce803d6-55ed-4b2e-b9f1-fdf345652692, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:35:55 compute-0 ceph-mon[75176]: pgmap v941: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 233 B/s rd, 22 KiB/s wr, 2 op/s
Nov 29 05:35:55 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3ce803d6-55ed-4b2e-b9f1-fdf345652692, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:35:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:35:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 29 05:35:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 05:35:55 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 05:35:55 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 419 B/s rd, 36 KiB/s wr, 4 op/s
Nov 29 05:35:56 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:56 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "format": "json"}]: dispatch
Nov 29 05:35:56 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "snap_name": "3ce803d6-55ed-4b2e-b9f1-fdf345652692", "format": "json"}]: dispatch
Nov 29 05:35:56 compute-0 ceph-mon[75176]: osdmap e135: 3 total, 3 up, 3 in
Nov 29 05:35:56 compute-0 ceph-mon[75176]: pgmap v943: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 419 B/s rd, 36 KiB/s wr, 4 op/s
Nov 29 05:35:57 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 35 KiB/s wr, 4 op/s
Nov 29 05:35:58 compute-0 rsyslogd[1003]: imjournal: 1309 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 05:35:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0e332c7f-e0d3-46ad-9a13-7cf0840fc484", "format": "json"}]: dispatch
Nov 29 05:35:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:35:58 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:35:58.829+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e332c7f-e0d3-46ad-9a13-7cf0840fc484' of type subvolume
Nov 29 05:35:58 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0e332c7f-e0d3-46ad-9a13-7cf0840fc484' of type subvolume
Nov 29 05:35:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0e332c7f-e0d3-46ad-9a13-7cf0840fc484", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, vol_name:cephfs) < ""
Nov 29 05:35:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0e332c7f-e0d3-46ad-9a13-7cf0840fc484'' moved to trashcan
Nov 29 05:35:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:35:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0e332c7f-e0d3-46ad-9a13-7cf0840fc484, vol_name:cephfs) < ""
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 05:35:59 compute-0 ceph-mon[75176]: pgmap v944: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 35 KiB/s wr, 4 op/s
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c8cd1abe-2662-4481-9c2f-01f70ea291ce/.meta.tmp'
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c8cd1abe-2662-4481-9c2f-01f70ea291ce/.meta.tmp' to config b'/volumes/_nogroup/c8cd1abe-2662-4481-9c2f-01f70ea291ce/.meta'
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "format": "json"}]: dispatch
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 05:35:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:35:59 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "snap_name": "3ce803d6-55ed-4b2e-b9f1-fdf345652692_f6acc0be-448f-4c63-b32e-428b3a708389", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ce803d6-55ed-4b2e-b9f1-fdf345652692_f6acc0be-448f-4c63-b32e-428b3a708389, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta.tmp'
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta.tmp' to config b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta'
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ce803d6-55ed-4b2e-b9f1-fdf345652692_f6acc0be-448f-4c63-b32e-428b3a708389, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "snap_name": "3ce803d6-55ed-4b2e-b9f1-fdf345652692", "force": true, "format": "json"}]: dispatch
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ce803d6-55ed-4b2e-b9f1-fdf345652692, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta.tmp'
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta.tmp' to config b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71/.meta'
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3ce803d6-55ed-4b2e-b9f1-fdf345652692, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:35:59 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Nov 29 05:36:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0e332c7f-e0d3-46ad-9a13-7cf0840fc484", "format": "json"}]: dispatch
Nov 29 05:36:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0e332c7f-e0d3-46ad-9a13-7cf0840fc484", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "format": "json"}]: dispatch
Nov 29 05:36:00 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "snap_name": "3ce803d6-55ed-4b2e-b9f1-fdf345652692_f6acc0be-448f-4c63-b32e-428b3a708389", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "snap_name": "3ce803d6-55ed-4b2e-b9f1-fdf345652692", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:01 compute-0 ceph-mon[75176]: pgmap v945: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Nov 29 05:36:01 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Nov 29 05:36:02 compute-0 podman[263292]: 2025-11-29 05:36:02.056343049 +0000 UTC m=+0.092423506 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb79334f-5107-432b-91aa-57c8d02f46a2/.meta.tmp'
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb79334f-5107-432b-91aa-57c8d02f46a2/.meta.tmp' to config b'/volumes/_nogroup/fb79334f-5107-432b-91aa-57c8d02f46a2/.meta'
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "format": "json"}]: dispatch
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:36:02 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "format": "json"}]: dispatch
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:02.910+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c8cd1abe-2662-4481-9c2f-01f70ea291ce' of type subvolume
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c8cd1abe-2662-4481-9c2f-01f70ea291ce' of type subvolume
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c8cd1abe-2662-4481-9c2f-01f70ea291ce'' moved to trashcan
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c8cd1abe-2662-4481-9c2f-01f70ea291ce, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "format": "json"}]: dispatch
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:02.959+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09990eae-c6d2-4985-ad1a-d7539b5b0a71' of type subvolume
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09990eae-c6d2-4985-ad1a-d7539b5b0a71' of type subvolume
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/09990eae-c6d2-4985-ad1a-d7539b5b0a71'' moved to trashcan
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:36:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09990eae-c6d2-4985-ad1a-d7539b5b0a71, vol_name:cephfs) < ""
Nov 29 05:36:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 29 05:36:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 05:36:03 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 29 05:36:03 compute-0 ceph-mon[75176]: pgmap v946: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 34 KiB/s wr, 4 op/s
Nov 29 05:36:03 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:03 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Nov 29 05:36:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "format": "json"}]: dispatch
Nov 29 05:36:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "format": "json"}]: dispatch
Nov 29 05:36:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c8cd1abe-2662-4481-9c2f-01f70ea291ce", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "format": "json"}]: dispatch
Nov 29 05:36:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09990eae-c6d2-4985-ad1a-d7539b5b0a71", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:04 compute-0 ceph-mon[75176]: osdmap e136: 3 total, 3 up, 3 in
Nov 29 05:36:05 compute-0 ceph-mon[75176]: pgmap v948: 305 pgs: 305 active+clean; 43 MiB data, 213 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s wr, 2 op/s
Nov 29 05:36:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:05 compute-0 sshd-session[263312]: Invalid user test from 45.120.216.232 port 36566
Nov 29 05:36:05 compute-0 sshd-session[263312]: Received disconnect from 45.120.216.232 port 36566:11: Bye Bye [preauth]
Nov 29 05:36:05 compute-0 sshd-session[263312]: Disconnected from invalid user test 45.120.216.232 port 36566 [preauth]
Nov 29 05:36:05 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 49 KiB/s wr, 5 op/s
Nov 29 05:36:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "format": "json"}]: dispatch
Nov 29 05:36:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fb79334f-5107-432b-91aa-57c8d02f46a2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fb79334f-5107-432b-91aa-57c8d02f46a2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:06 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:06.112+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb79334f-5107-432b-91aa-57c8d02f46a2' of type subvolume
Nov 29 05:36:06 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb79334f-5107-432b-91aa-57c8d02f46a2' of type subvolume
Nov 29 05:36:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 05:36:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fb79334f-5107-432b-91aa-57c8d02f46a2'' moved to trashcan
Nov 29 05:36:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:36:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb79334f-5107-432b-91aa-57c8d02f46a2, vol_name:cephfs) < ""
Nov 29 05:36:07 compute-0 ceph-mon[75176]: pgmap v949: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 49 KiB/s wr, 5 op/s
Nov 29 05:36:07 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "format": "json"}]: dispatch
Nov 29 05:36:07 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb79334f-5107-432b-91aa-57c8d02f46a2", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:07 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 49 KiB/s wr, 5 op/s
Nov 29 05:36:09 compute-0 ceph-mon[75176]: pgmap v950: 305 pgs: 305 active+clean; 43 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 49 KiB/s wr, 5 op/s
Nov 29 05:36:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 05:36:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/969d69e3-5179-4284-9d56-4ddf6b5b95ef/.meta.tmp'
Nov 29 05:36:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/969d69e3-5179-4284-9d56-4ddf6b5b95ef/.meta.tmp' to config b'/volumes/_nogroup/969d69e3-5179-4284-9d56-4ddf6b5b95ef/.meta'
Nov 29 05:36:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 05:36:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "format": "json"}]: dispatch
Nov 29 05:36:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 05:36:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 05:36:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:36:09 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:09 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 05:36:10 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 29 05:36:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 05:36:10 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 29 05:36:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "format": "json"}]: dispatch
Nov 29 05:36:11 compute-0 ceph-mon[75176]: pgmap v951: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 05:36:11 compute-0 ceph-mon[75176]: osdmap e137: 3 total, 3 up, 3 in
Nov 29 05:36:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:36:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:36:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:36:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:36:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:36:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:36:11 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 689 B/s rd, 50 KiB/s wr, 6 op/s
Nov 29 05:36:13 compute-0 ceph-mon[75176]: pgmap v953: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 689 B/s rd, 50 KiB/s wr, 6 op/s
Nov 29 05:36:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:36:13.751 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:36:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:36:13.751 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:36:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:36:13.751 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:36:13 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 05:36:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:36:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2236799450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:36:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:36:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2236799450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:36:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "format": "json"}]: dispatch
Nov 29 05:36:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:14 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:14.840+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '969d69e3-5179-4284-9d56-4ddf6b5b95ef' of type subvolume
Nov 29 05:36:14 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '969d69e3-5179-4284-9d56-4ddf6b5b95ef' of type subvolume
Nov 29 05:36:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 05:36:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/969d69e3-5179-4284-9d56-4ddf6b5b95ef'' moved to trashcan
Nov 29 05:36:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:36:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:969d69e3-5179-4284-9d56-4ddf6b5b95ef, vol_name:cephfs) < ""
Nov 29 05:36:15 compute-0 ceph-mon[75176]: pgmap v954: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 05:36:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/2236799450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:36:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/2236799450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:36:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:15 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 05:36:16 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "format": "json"}]: dispatch
Nov 29 05:36:16 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "969d69e3-5179-4284-9d56-4ddf6b5b95ef", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:17 compute-0 ceph-mon[75176]: pgmap v955: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 05:36:17 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 05:36:19 compute-0 ceph-mon[75176]: pgmap v956: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 05:36:19 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 2 op/s
Nov 29 05:36:20 compute-0 podman[263315]: 2025-11-29 05:36:20.028491002 +0000 UTC m=+0.078114151 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 05:36:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:20 compute-0 ceph-mon[75176]: pgmap v957: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 19 KiB/s wr, 2 op/s
Nov 29 05:36:20 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:20 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 05:36:20 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9f9bf0ea-9f71-4161-881f-1c5e81eea943/.meta.tmp'
Nov 29 05:36:20 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9f9bf0ea-9f71-4161-881f-1c5e81eea943/.meta.tmp' to config b'/volumes/_nogroup/9f9bf0ea-9f71-4161-881f-1c5e81eea943/.meta'
Nov 29 05:36:20 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 05:36:20 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "format": "json"}]: dispatch
Nov 29 05:36:20 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 05:36:20 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 05:36:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:36:20 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:21 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:21 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "format": "json"}]: dispatch
Nov 29 05:36:21 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:21 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 174 B/s rd, 16 KiB/s wr, 2 op/s
Nov 29 05:36:22 compute-0 ceph-mon[75176]: pgmap v958: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 174 B/s rd, 16 KiB/s wr, 2 op/s
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "format": "json"}]: dispatch
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:23 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:23.322+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f9bf0ea-9f71-4161-881f-1c5e81eea943' of type subvolume
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f9bf0ea-9f71-4161-881f-1c5e81eea943' of type subvolume
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9f9bf0ea-9f71-4161-881f-1c5e81eea943'' moved to trashcan
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f9bf0ea-9f71-4161-881f-1c5e81eea943, vol_name:cephfs) < ""
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/70f1f9a5-b960-4859-afd1-e8403dcbe455/.meta.tmp'
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70f1f9a5-b960-4859-afd1-e8403dcbe455/.meta.tmp' to config b'/volumes/_nogroup/70f1f9a5-b960-4859-afd1-e8403dcbe455/.meta'
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "format": "json"}]: dispatch
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 05:36:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:36:23 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:23 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:23 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 16 KiB/s wr, 2 op/s
Nov 29 05:36:24 compute-0 podman[263335]: 2025-11-29 05:36:24.061151203 +0000 UTC m=+0.108669189 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 05:36:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "format": "json"}]: dispatch
Nov 29 05:36:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9f9bf0ea-9f71-4161-881f-1c5e81eea943", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "format": "json"}]: dispatch
Nov 29 05:36:24 compute-0 ceph-mon[75176]: pgmap v959: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 16 KiB/s wr, 2 op/s
Nov 29 05:36:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:25 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 3 op/s
Nov 29 05:36:26 compute-0 nova_compute[254898]: 2025-11-29 05:36:26.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:36:27 compute-0 ceph-mon[75176]: pgmap v960: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 34 KiB/s wr, 3 op/s
Nov 29 05:36:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp'
Nov 29 05:36:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp' to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta'
Nov 29 05:36:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "format": "json"}]: dispatch
Nov 29 05:36:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:36:27 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:27 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 28 KiB/s wr, 2 op/s
Nov 29 05:36:28 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:28 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:36:28.261 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:36:28 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:36:28.262 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 05:36:28 compute-0 nova_compute[254898]: 2025-11-29 05:36:28.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:36:28 compute-0 nova_compute[254898]: 2025-11-29 05:36:28.965 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:36:29 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:29 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "format": "json"}]: dispatch
Nov 29 05:36:29 compute-0 ceph-mon[75176]: pgmap v961: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 28 KiB/s wr, 2 op/s
Nov 29 05:36:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "48601f02-9051-4603-a049-8748d3e87534", "format": "json"}]: dispatch
Nov 29 05:36:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:48601f02-9051-4603-a049-8748d3e87534, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:48601f02-9051-4603-a049-8748d3e87534, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:29 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48601f02-9051-4603-a049-8748d3e87534' of type subvolume
Nov 29 05:36:29 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:29.316+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '48601f02-9051-4603-a049-8748d3e87534' of type subvolume
Nov 29 05:36:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "48601f02-9051-4603-a049-8748d3e87534", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48601f02-9051-4603-a049-8748d3e87534, vol_name:cephfs) < ""
Nov 29 05:36:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path b'/volumes/_nogroup/48601f02-9051-4603-a049-8748d3e87534' moved to trashcan
Nov 29 05:36:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:36:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:48601f02-9051-4603-a049-8748d3e87534, vol_name:cephfs) < ""
Nov 29 05:36:29 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 43 KiB/s wr, 4 op/s
Nov 29 05:36:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:30 compute-0 nova_compute[254898]: 2025-11-29 05:36:30.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:36:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "48601f02-9051-4603-a049-8748d3e87534", "format": "json"}]: dispatch
Nov 29 05:36:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "48601f02-9051-4603-a049-8748d3e87534", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:31 compute-0 ceph-mon[75176]: pgmap v962: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 43 KiB/s wr, 4 op/s
Nov 29 05:36:31 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:36:31.264 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 05:36:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "snap_name": "28e850ab-7085-4183-b727-8c2173bcd1fc", "format": "json"}]: dispatch
Nov 29 05:36:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:31 compute-0 nova_compute[254898]: 2025-11-29 05:36:31.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:36:31 compute-0 nova_compute[254898]: 2025-11-29 05:36:31.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:36:31 compute-0 nova_compute[254898]: 2025-11-29 05:36:31.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:36:31 compute-0 nova_compute[254898]: 2025-11-29 05:36:31.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:36:31 compute-0 nova_compute[254898]: 2025-11-29 05:36:31.986 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:36:31 compute-0 nova_compute[254898]: 2025-11-29 05:36:31.987 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:36:31 compute-0 nova_compute[254898]: 2025-11-29 05:36:31.987 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:36:31 compute-0 nova_compute[254898]: 2025-11-29 05:36:31.988 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:36:31 compute-0 nova_compute[254898]: 2025-11-29 05:36:31.988 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:36:31 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 3 op/s
Nov 29 05:36:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:36:32 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/558752570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:36:32 compute-0 nova_compute[254898]: 2025-11-29 05:36:32.439 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:36:32 compute-0 nova_compute[254898]: 2025-11-29 05:36:32.595 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:36:32 compute-0 nova_compute[254898]: 2025-11-29 05:36:32.597 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5145MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:36:32 compute-0 nova_compute[254898]: 2025-11-29 05:36:32.597 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:36:32 compute-0 nova_compute[254898]: 2025-11-29 05:36:32.597 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:36:33 compute-0 nova_compute[254898]: 2025-11-29 05:36:33.024 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:36:33 compute-0 nova_compute[254898]: 2025-11-29 05:36:33.025 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:36:33 compute-0 podman[263383]: 2025-11-29 05:36:33.039132753 +0000 UTC m=+0.081621845 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 05:36:33 compute-0 nova_compute[254898]: 2025-11-29 05:36:33.052 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:36:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "snap_name": "28e850ab-7085-4183-b727-8c2173bcd1fc", "format": "json"}]: dispatch
Nov 29 05:36:33 compute-0 ceph-mon[75176]: pgmap v963: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 3 op/s
Nov 29 05:36:33 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/558752570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:36:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:36:33 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3442484796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:36:33 compute-0 nova_compute[254898]: 2025-11-29 05:36:33.482 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:36:33 compute-0 nova_compute[254898]: 2025-11-29 05:36:33.487 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:36:33 compute-0 nova_compute[254898]: 2025-11-29 05:36:33.506 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:36:33 compute-0 nova_compute[254898]: 2025-11-29 05:36:33.507 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:36:33 compute-0 nova_compute[254898]: 2025-11-29 05:36:33.508 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:36:33 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 3 op/s
Nov 29 05:36:34 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3442484796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:36:35 compute-0 ceph-mon[75176]: pgmap v964: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 33 KiB/s wr, 3 op/s
Nov 29 05:36:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:35 compute-0 nova_compute[254898]: 2025-11-29 05:36:35.503 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:36:35 compute-0 nova_compute[254898]: 2025-11-29 05:36:35.504 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:36:35 compute-0 nova_compute[254898]: 2025-11-29 05:36:35.505 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:36:35 compute-0 nova_compute[254898]: 2025-11-29 05:36:35.505 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:36:35 compute-0 nova_compute[254898]: 2025-11-29 05:36:35.527 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:36:35 compute-0 nova_compute[254898]: 2025-11-29 05:36:35.527 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:36:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp'
Nov 29 05:36:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp' to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta'
Nov 29 05:36:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "format": "json"}]: dispatch
Nov 29 05:36:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:36:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:35 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 39 KiB/s wr, 5 op/s
Nov 29 05:36:36 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:36 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "format": "json"}]: dispatch
Nov 29 05:36:36 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:36 compute-0 ceph-mon[75176]: pgmap v965: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 39 KiB/s wr, 5 op/s
Nov 29 05:36:37 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 05:36:39 compute-0 ceph-mon[75176]: pgmap v966: 305 pgs: 305 active+clean; 44 MiB data, 214 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 05:36:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "snap_name": "0abd9e26-18ae-42cb-9460-0ccd7be51363", "format": "json"}]: dispatch
Nov 29 05:36:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:39 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 33 KiB/s wr, 4 op/s
Nov 29 05:36:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "snap_name": "0abd9e26-18ae-42cb-9460-0ccd7be51363", "format": "json"}]: dispatch
Nov 29 05:36:41 compute-0 ceph-mon[75176]: pgmap v967: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 33 KiB/s wr, 4 op/s
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:36:41
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'backups']
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f9c22880>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f9c22730>)]
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:36:41 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Nov 29 05:36:43 compute-0 ceph-mon[75176]: pgmap v968: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Nov 29 05:36:43 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.csskcz(active, since 28m)
Nov 29 05:36:43 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Nov 29 05:36:44 compute-0 ceph-mon[75176]: mgrmap e13: compute-0.csskcz(active, since 28m)
Nov 29 05:36:44 compute-0 sudo[263426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:36:44 compute-0 sudo[263426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:44 compute-0 sudo[263426]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "snap_name": "0abd9e26-18ae-42cb-9460-0ccd7be51363_ee139701-4c1f-4617-b59a-7c6029149324", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363_ee139701-4c1f-4617-b59a-7c6029149324, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp'
Nov 29 05:36:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp' to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta'
Nov 29 05:36:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363_ee139701-4c1f-4617-b59a-7c6029149324, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "snap_name": "0abd9e26-18ae-42cb-9460-0ccd7be51363", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp'
Nov 29 05:36:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta.tmp' to config b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4/.meta'
Nov 29 05:36:44 compute-0 sudo[263451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:36:44 compute-0 sudo[263451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:44 compute-0 sudo[263451]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0abd9e26-18ae-42cb-9460-0ccd7be51363, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:44 compute-0 sudo[263476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:36:44 compute-0 sudo[263476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:44 compute-0 sudo[263476]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:44 compute-0 sudo[263501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:36:44 compute-0 sudo[263501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:45 compute-0 ceph-mon[75176]: pgmap v969: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 18 KiB/s wr, 2 op/s
Nov 29 05:36:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:45 compute-0 sudo[263501]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:36:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:36:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:36:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:36:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:36:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:36:45 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d753479f-197f-461a-8f0e-4387ea667c0c does not exist
Nov 29 05:36:45 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev e00f2512-4242-410f-90fb-9cc597465dae does not exist
Nov 29 05:36:45 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a48fe57b-3d18-40c8-93fe-f8be926f6627 does not exist
Nov 29 05:36:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:36:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:36:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:36:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:36:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:36:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:36:45 compute-0 sudo[263557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:36:45 compute-0 sudo[263557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:45 compute-0 sudo[263557]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:45 compute-0 sudo[263582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:36:45 compute-0 sudo[263582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:45 compute-0 sudo[263582]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:45 compute-0 sudo[263607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:36:45 compute-0 sudo[263607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:45 compute-0 sudo[263607]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:45 compute-0 sudo[263632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:36:45 compute-0 sudo[263632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:45 compute-0 podman[263700]: 2025-11-29 05:36:45.93910617 +0000 UTC m=+0.063311852 container create d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:36:45 compute-0 systemd[1]: Started libpod-conmon-d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47.scope.
Nov 29 05:36:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 05:36:45 compute-0 podman[263700]: 2025-11-29 05:36:45.906039099 +0000 UTC m=+0.030244761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:36:45 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 05:36:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:36:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/63e32269-5cd1-4b91-be8c-8e96abc0fca0/.meta.tmp'
Nov 29 05:36:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/63e32269-5cd1-4b91-be8c-8e96abc0fca0/.meta.tmp' to config b'/volumes/_nogroup/63e32269-5cd1-4b91-be8c-8e96abc0fca0/.meta'
Nov 29 05:36:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 05:36:46 compute-0 podman[263700]: 2025-11-29 05:36:46.016607074 +0000 UTC m=+0.140812656 container init d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:36:46 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "format": "json"}]: dispatch
Nov 29 05:36:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 05:36:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 05:36:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:36:46 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:46 compute-0 podman[263700]: 2025-11-29 05:36:46.025520989 +0000 UTC m=+0.149726551 container start d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:36:46 compute-0 podman[263700]: 2025-11-29 05:36:46.028514372 +0000 UTC m=+0.152719944 container attach d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:36:46 compute-0 eloquent_carson[263716]: 167 167
Nov 29 05:36:46 compute-0 systemd[1]: libpod-d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47.scope: Deactivated successfully.
Nov 29 05:36:46 compute-0 conmon[263716]: conmon d590007deb3a7207aa4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47.scope/container/memory.events
Nov 29 05:36:46 compute-0 podman[263721]: 2025-11-29 05:36:46.088337028 +0000 UTC m=+0.035354915 container died d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:36:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "snap_name": "0abd9e26-18ae-42cb-9460-0ccd7be51363_ee139701-4c1f-4617-b59a-7c6029149324", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "snap_name": "0abd9e26-18ae-42cb-9460-0ccd7be51363", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:36:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:36:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:36:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:36:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:36:46 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:36:46 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4c2d3a5cc1357250fc8f19426ea2fffdd20e784d5e4572e79cdf449734d4404-merged.mount: Deactivated successfully.
Nov 29 05:36:46 compute-0 podman[263721]: 2025-11-29 05:36:46.123703775 +0000 UTC m=+0.070721662 container remove d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:36:46 compute-0 systemd[1]: libpod-conmon-d590007deb3a7207aa4f7f5f8c59132dd2e3c06daacd1db4204bd6f2d0430a47.scope: Deactivated successfully.
Nov 29 05:36:46 compute-0 podman[263743]: 2025-11-29 05:36:46.295488309 +0000 UTC m=+0.039462815 container create a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:36:46 compute-0 systemd[1]: Started libpod-conmon-a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee.scope.
Nov 29 05:36:46 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:46 compute-0 podman[263743]: 2025-11-29 05:36:46.372501802 +0000 UTC m=+0.116476308 container init a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:36:46 compute-0 podman[263743]: 2025-11-29 05:36:46.279570285 +0000 UTC m=+0.023544771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:36:46 compute-0 podman[263743]: 2025-11-29 05:36:46.382285419 +0000 UTC m=+0.126259885 container start a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:36:46 compute-0 podman[263743]: 2025-11-29 05:36:46.385548318 +0000 UTC m=+0.129522794 container attach a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:36:47 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:47 compute-0 ceph-mon[75176]: pgmap v970: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 3 op/s
Nov 29 05:36:47 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "format": "json"}]: dispatch
Nov 29 05:36:47 compute-0 happy_bhaskara[263760]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:36:47 compute-0 happy_bhaskara[263760]: --> relative data size: 1.0
Nov 29 05:36:47 compute-0 happy_bhaskara[263760]: --> All data devices are unavailable
Nov 29 05:36:47 compute-0 systemd[1]: libpod-a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee.scope: Deactivated successfully.
Nov 29 05:36:47 compute-0 podman[263789]: 2025-11-29 05:36:47.459896554 +0000 UTC m=+0.028463029 container died a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:36:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "snap_name": "28e850ab-7085-4183-b727-8c2173bcd1fc_a1b607a0-ec1d-4d0c-aa68-b82ad11908c2", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc_a1b607a0-ec1d-4d0c-aa68-b82ad11908c2, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp'
Nov 29 05:36:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp' to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta'
Nov 29 05:36:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc_a1b607a0-ec1d-4d0c-aa68-b82ad11908c2, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "snap_name": "28e850ab-7085-4183-b727-8c2173bcd1fc", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a6f1e4c0e9942cc9d6764c3d5465dfe55233286b9abdfc8fe026a585f24eca7-merged.mount: Deactivated successfully.
Nov 29 05:36:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp'
Nov 29 05:36:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta.tmp' to config b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c/.meta'
Nov 29 05:36:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:28e850ab-7085-4183-b727-8c2173bcd1fc, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:47 compute-0 podman[263789]: 2025-11-29 05:36:47.530965104 +0000 UTC m=+0.099531509 container remove a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_bhaskara, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 05:36:47 compute-0 systemd[1]: libpod-conmon-a13c8e3a0ec4d411a13936b05b06517757c048377e510a55624152cbbe86bdee.scope: Deactivated successfully.
Nov 29 05:36:47 compute-0 sudo[263632]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:47 compute-0 sudo[263805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:36:47 compute-0 sudo[263805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:47 compute-0 sudo[263805]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:47 compute-0 sudo[263830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:36:47 compute-0 sudo[263830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:47 compute-0 sudo[263830]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:47 compute-0 sudo[263855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:36:47 compute-0 sudo[263855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:47 compute-0 sudo[263855]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:47 compute-0 sudo[263880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:36:47 compute-0 sudo[263880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 2 op/s
Nov 29 05:36:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 29 05:36:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 05:36:48 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 29 05:36:48 compute-0 podman[263945]: 2025-11-29 05:36:48.27954449 +0000 UTC m=+0.068492098 container create 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:36:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "740417e2-7402-40fb-a24e-d743db894fa4", "format": "json"}]: dispatch
Nov 29 05:36:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:740417e2-7402-40fb-a24e-d743db894fa4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:48 compute-0 systemd[1]: Started libpod-conmon-3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52.scope.
Nov 29 05:36:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:740417e2-7402-40fb-a24e-d743db894fa4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:48 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:48.335+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '740417e2-7402-40fb-a24e-d743db894fa4' of type subvolume
Nov 29 05:36:48 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '740417e2-7402-40fb-a24e-d743db894fa4' of type subvolume
Nov 29 05:36:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:48 compute-0 podman[263945]: 2025-11-29 05:36:48.254240968 +0000 UTC m=+0.043188656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:36:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/740417e2-7402-40fb-a24e-d743db894fa4'' moved to trashcan
Nov 29 05:36:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:36:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:740417e2-7402-40fb-a24e-d743db894fa4, vol_name:cephfs) < ""
Nov 29 05:36:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:36:48 compute-0 podman[263945]: 2025-11-29 05:36:48.381370703 +0000 UTC m=+0.170318401 container init 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:36:48 compute-0 podman[263945]: 2025-11-29 05:36:48.391735053 +0000 UTC m=+0.180682691 container start 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 29 05:36:48 compute-0 podman[263945]: 2025-11-29 05:36:48.39612536 +0000 UTC m=+0.185072998 container attach 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 05:36:48 compute-0 serene_aryabhata[263961]: 167 167
Nov 29 05:36:48 compute-0 systemd[1]: libpod-3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52.scope: Deactivated successfully.
Nov 29 05:36:48 compute-0 podman[263945]: 2025-11-29 05:36:48.399648015 +0000 UTC m=+0.188595693 container died 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:36:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-71d5b83171d2bf1e37208ebd02c221c70c0ab24f376c4be24b6df6229dd2491c-merged.mount: Deactivated successfully.
Nov 29 05:36:48 compute-0 podman[263945]: 2025-11-29 05:36:48.443960317 +0000 UTC m=+0.232907905 container remove 3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:36:48 compute-0 systemd[1]: libpod-conmon-3778b336711255a24b2d4d334065e67a7fa9ca60f0cf849f0645070f83813c52.scope: Deactivated successfully.
Nov 29 05:36:48 compute-0 podman[263985]: 2025-11-29 05:36:48.653541946 +0000 UTC m=+0.059562791 container create 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 05:36:48 compute-0 systemd[1]: Started libpod-conmon-85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016.scope.
Nov 29 05:36:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400f8cf24e4f0279d1817c650d85a065fa2dd75098b4e00f67bbf410bbcb18f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400f8cf24e4f0279d1817c650d85a065fa2dd75098b4e00f67bbf410bbcb18f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400f8cf24e4f0279d1817c650d85a065fa2dd75098b4e00f67bbf410bbcb18f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400f8cf24e4f0279d1817c650d85a065fa2dd75098b4e00f67bbf410bbcb18f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:48 compute-0 podman[263985]: 2025-11-29 05:36:48.630241273 +0000 UTC m=+0.036262188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:36:48 compute-0 podman[263985]: 2025-11-29 05:36:48.727856063 +0000 UTC m=+0.133876928 container init 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:36:48 compute-0 podman[263985]: 2025-11-29 05:36:48.735297504 +0000 UTC m=+0.141318329 container start 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:36:48 compute-0 podman[263985]: 2025-11-29 05:36:48.738205304 +0000 UTC m=+0.144226169 container attach 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 05:36:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 29 05:36:49 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "snap_name": "28e850ab-7085-4183-b727-8c2173bcd1fc_a1b607a0-ec1d-4d0c-aa68-b82ad11908c2", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:49 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "snap_name": "28e850ab-7085-4183-b727-8c2173bcd1fc", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:49 compute-0 ceph-mon[75176]: pgmap v971: 305 pgs: 305 active+clean; 44 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 2 op/s
Nov 29 05:36:49 compute-0 ceph-mon[75176]: osdmap e138: 3 total, 3 up, 3 in
Nov 29 05:36:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 05:36:49 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 05:36:49 compute-0 determined_hugle[264002]: {
Nov 29 05:36:49 compute-0 determined_hugle[264002]:     "0": [
Nov 29 05:36:49 compute-0 determined_hugle[264002]:         {
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "devices": [
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "/dev/loop3"
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             ],
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_name": "ceph_lv0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_size": "21470642176",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "name": "ceph_lv0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "tags": {
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.cluster_name": "ceph",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.crush_device_class": "",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.encrypted": "0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.osd_id": "0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.type": "block",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.vdo": "0"
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             },
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "type": "block",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "vg_name": "ceph_vg0"
Nov 29 05:36:49 compute-0 determined_hugle[264002]:         }
Nov 29 05:36:49 compute-0 determined_hugle[264002]:     ],
Nov 29 05:36:49 compute-0 determined_hugle[264002]:     "1": [
Nov 29 05:36:49 compute-0 determined_hugle[264002]:         {
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "devices": [
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "/dev/loop4"
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             ],
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_name": "ceph_lv1",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_size": "21470642176",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "name": "ceph_lv1",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "tags": {
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.cluster_name": "ceph",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.crush_device_class": "",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.encrypted": "0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.osd_id": "1",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.type": "block",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.vdo": "0"
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             },
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "type": "block",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "vg_name": "ceph_vg1"
Nov 29 05:36:49 compute-0 determined_hugle[264002]:         }
Nov 29 05:36:49 compute-0 determined_hugle[264002]:     ],
Nov 29 05:36:49 compute-0 determined_hugle[264002]:     "2": [
Nov 29 05:36:49 compute-0 determined_hugle[264002]:         {
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "devices": [
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "/dev/loop5"
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             ],
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_name": "ceph_lv2",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_size": "21470642176",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "name": "ceph_lv2",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "tags": {
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.cluster_name": "ceph",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.crush_device_class": "",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.encrypted": "0",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.osd_id": "2",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.type": "block",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:                 "ceph.vdo": "0"
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             },
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "type": "block",
Nov 29 05:36:49 compute-0 determined_hugle[264002]:             "vg_name": "ceph_vg2"
Nov 29 05:36:49 compute-0 determined_hugle[264002]:         }
Nov 29 05:36:49 compute-0 determined_hugle[264002]:     ]
Nov 29 05:36:49 compute-0 determined_hugle[264002]: }
Nov 29 05:36:49 compute-0 systemd[1]: libpod-85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016.scope: Deactivated successfully.
Nov 29 05:36:49 compute-0 podman[263985]: 2025-11-29 05:36:49.471699836 +0000 UTC m=+0.877720701 container died 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:36:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-400f8cf24e4f0279d1817c650d85a065fa2dd75098b4e00f67bbf410bbcb18f4-merged.mount: Deactivated successfully.
Nov 29 05:36:49 compute-0 podman[263985]: 2025-11-29 05:36:49.527540917 +0000 UTC m=+0.933561732 container remove 85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:36:49 compute-0 systemd[1]: libpod-conmon-85750c053cce98b32d48d67e8542c4a749a896516afb4c7d716df30e832fd016.scope: Deactivated successfully.
Nov 29 05:36:49 compute-0 sudo[263880]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:49 compute-0 sudo[264023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:36:49 compute-0 sudo[264023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:49 compute-0 sudo[264023]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:49 compute-0 sudo[264048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:36:49 compute-0 sudo[264048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:49 compute-0 sudo[264048]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:49 compute-0 sudo[264073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:36:49 compute-0 sudo[264073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:49 compute-0 sudo[264073]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:49 compute-0 sudo[264098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:36:49 compute-0 sudo[264098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 46 KiB/s wr, 5 op/s
Nov 29 05:36:50 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "740417e2-7402-40fb-a24e-d743db894fa4", "format": "json"}]: dispatch
Nov 29 05:36:50 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "740417e2-7402-40fb-a24e-d743db894fa4", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:50 compute-0 ceph-mon[75176]: osdmap e139: 3 total, 3 up, 3 in
Nov 29 05:36:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:50 compute-0 podman[264162]: 2025-11-29 05:36:50.284414444 +0000 UTC m=+0.044333443 container create a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:36:50 compute-0 systemd[1]: Started libpod-conmon-a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7.scope.
Nov 29 05:36:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:36:50 compute-0 podman[264162]: 2025-11-29 05:36:50.352477301 +0000 UTC m=+0.112396290 container init a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:36:50 compute-0 podman[264162]: 2025-11-29 05:36:50.259249805 +0000 UTC m=+0.019168794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:36:50 compute-0 podman[264162]: 2025-11-29 05:36:50.365621508 +0000 UTC m=+0.125540527 container start a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 05:36:50 compute-0 keen_williamson[264180]: 167 167
Nov 29 05:36:50 compute-0 systemd[1]: libpod-a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7.scope: Deactivated successfully.
Nov 29 05:36:50 compute-0 podman[264162]: 2025-11-29 05:36:50.37061978 +0000 UTC m=+0.130538799 container attach a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 05:36:50 compute-0 podman[264162]: 2025-11-29 05:36:50.370968698 +0000 UTC m=+0.130887677 container died a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:36:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaebebc1b7f41e3b248145c45b13ac5ec1be0522060618289bd777af84bacefd-merged.mount: Deactivated successfully.
Nov 29 05:36:50 compute-0 podman[264162]: 2025-11-29 05:36:50.419912122 +0000 UTC m=+0.179831101 container remove a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_williamson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 05:36:50 compute-0 systemd[1]: libpod-conmon-a58f587c7170f5d837bee41f04187d622fbf2d22720cf65d9c1d7c74a591cae7.scope: Deactivated successfully.
Nov 29 05:36:50 compute-0 podman[264177]: 2025-11-29 05:36:50.434990657 +0000 UTC m=+0.098260098 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 05:36:50 compute-0 podman[264224]: 2025-11-29 05:36:50.582048234 +0000 UTC m=+0.045467992 container create 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:36:50 compute-0 systemd[1]: Started libpod-conmon-718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879.scope.
Nov 29 05:36:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:36:50 compute-0 podman[264224]: 2025-11-29 05:36:50.560178034 +0000 UTC m=+0.023597872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69653d6ad70d80540d28cdc4a61e544b3eb93da853b7eca6bee76cc8e62d35af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69653d6ad70d80540d28cdc4a61e544b3eb93da853b7eca6bee76cc8e62d35af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69653d6ad70d80540d28cdc4a61e544b3eb93da853b7eca6bee76cc8e62d35af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69653d6ad70d80540d28cdc4a61e544b3eb93da853b7eca6bee76cc8e62d35af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:36:50 compute-0 podman[264224]: 2025-11-29 05:36:50.664914078 +0000 UTC m=+0.128333846 container init 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:36:50 compute-0 podman[264224]: 2025-11-29 05:36:50.674959271 +0000 UTC m=+0.138379029 container start 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:36:50 compute-0 podman[264224]: 2025-11-29 05:36:50.677481572 +0000 UTC m=+0.140901330 container attach 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "format": "json"}]: dispatch
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:50 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:50.822+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '63e32269-5cd1-4b91-be8c-8e96abc0fca0' of type subvolume
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '63e32269-5cd1-4b91-be8c-8e96abc0fca0' of type subvolume
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/63e32269-5cd1-4b91-be8c-8e96abc0fca0'' moved to trashcan
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:63e32269-5cd1-4b91-be8c-8e96abc0fca0, vol_name:cephfs) < ""
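[editor's note] The cycle above (audit dispatch of `fs clone status`, an EOPNOTSUPP (95) reply, then a forced `fs subvolume rm`) shows the OpenStack client probing each subvolume before deleting it; plain subvolumes (type `subvolume`, not `clone`) reject the status call. A minimal sketch of that probe, assuming the `ceph` CLI and a keyring are available; the helper name and its error handling are illustrative, not the client's actual code:

```python
import json
import subprocess

def is_pending_clone(vol_name: str, sub_name: str) -> bool:
    """Probe `ceph fs clone status`; plain subvolumes answer EOPNOTSUPP (95),
    which we treat as "not a clone", mirroring the rm-after-error flow above."""
    proc = subprocess.run(
        ["ceph", "fs", "clone", "status", vol_name, sub_name, "--format", "json"],
        capture_output=True, text=True,
    )
    if proc.returncode != 0:
        # e.g. "Operation not supported ... not allowed on subvolume ... of type subvolume"
        return False
    state = json.loads(proc.stdout)["status"]["state"]
    return state in ("pending", "in-progress")

# Once the probe says it is not an in-flight clone, the client removes it:
# subprocess.run(["ceph", "fs", "subvolume", "rm", "cephfs", sub_name, "--force"], check=True)
```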
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "format": "json"}]: dispatch
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:50 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:50.996+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff968508-e63c-4125-8d0a-ffeca3c4312c' of type subvolume
Nov 29 05:36:50 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff968508-e63c-4125-8d0a-ffeca3c4312c' of type subvolume
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff968508-e63c-4125-8d0a-ffeca3c4312c'' moved to trashcan
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff968508-e63c-4125-8d0a-ffeca3c4312c, vol_name:cephfs) < ""
Nov 29 05:36:51 compute-0 ceph-mon[75176]: pgmap v974: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 46 KiB/s wr, 5 op/s
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.513314368040633e-05 of space, bias 4.0, pg target 0.06615977241648759 quantized to 16 (current 16)
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
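[editor's note] The pg_autoscaler pass above computes each pool's raw PG target as capacity-usage ratio x bias x a PG budget; the raw value is then quantized to a power of two, and pg_num is only changed when the target diverges from the current value by a large factor (3x by default). A budget of 300 (an assumption: 3 OSDs x the default mon_target_pg_per_osd of 100) exactly reproduces the logged numbers:

```python
# Re-derive the pg_autoscaler targets logged above. The factor 300 is an
# assumption: 3 OSDs x mon_target_pg_per_osd (default 100).
PG_BUDGET = 3 * 100

def pg_target(used_ratio: float, bias: float) -> float:
    return used_ratio * bias * PG_BUDGET

print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557249951162337 ('.mgr')
print(pg_target(0.000665858301588852, 1.0))    # ~0.19975749047665559 ('images')
print(pg_target(5.513314368040633e-05, 4.0))   # ~0.06615977241648759 ('cephfs.cephfs.meta')
```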
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]: {
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "osd_id": 0,
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "type": "bluestore"
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:     },
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "osd_id": 1,
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "type": "bluestore"
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:     },
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "osd_id": 2,
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:         "type": "bluestore"
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]:     }
Nov 29 05:36:51 compute-0 nostalgic_burnell[264241]: }
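[editor's note] The JSON printed by the one-shot ceph container above is an OSD inventory keyed by osd_uuid, each entry carrying ceph_fsid, the LVM device, the osd_id, and the objectstore type. A minimal parse sketch (the function name is hypothetical):

```python
import json

def osd_devices(inventory_text: str) -> dict[int, str]:
    """Map osd_id -> device path from the inventory JSON emitted above.

    For the logged data this yields:
    {0: '/dev/mapper/ceph_vg0-ceph_lv0',
     1: '/dev/mapper/ceph_vg1-ceph_lv1',
     2: '/dev/mapper/ceph_vg2-ceph_lv2'}
    """
    inventory = json.loads(inventory_text)  # keyed by osd_uuid
    return {entry["osd_id"]: entry["device"] for entry in inventory.values()}
```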
Nov 29 05:36:51 compute-0 systemd[1]: libpod-718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879.scope: Deactivated successfully.
Nov 29 05:36:51 compute-0 podman[264224]: 2025-11-29 05:36:51.609101266 +0000 UTC m=+1.072521024 container died 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:36:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-69653d6ad70d80540d28cdc4a61e544b3eb93da853b7eca6bee76cc8e62d35af-merged.mount: Deactivated successfully.
Nov 29 05:36:51 compute-0 podman[264224]: 2025-11-29 05:36:51.661276138 +0000 UTC m=+1.124695896 container remove 718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:36:51 compute-0 systemd[1]: libpod-conmon-718fe4d75231fe14fcd7fab3c361bfd07a8d99ce29e5505fbc0722cb598b0879.scope: Deactivated successfully.
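[editor's note] Between the init/start/attach events earlier and the died/remove events just above, podman has reported the full lifecycle of the scratch container `nostalgic_burnell`. A sketch that replays those events from podman's event log; `--since`, `--stream`, `--filter`, and `--format` are standard `podman events` options, and the JSON field names are as emitted by recent podman releases:

```python
import json
import subprocess

# Replay the lifecycle podman logged above:
# init -> start -> attach -> died -> remove.
proc = subprocess.run(
    ["podman", "events", "--since", "5m", "--stream=false",
     "--filter", "container=nostalgic_burnell", "--format", "json"],
    capture_output=True, text=True, check=True,
)
for line in proc.stdout.splitlines():
    event = json.loads(line)
    print(event["Status"], event.get("Name") or event.get("ID"))
```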
Nov 29 05:36:51 compute-0 sudo[264098]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:36:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:36:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:36:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev b9dc8a88-f02c-447c-9953-2027921c39f8 does not exist
Nov 29 05:36:51 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev dacc5641-1f2e-4325-b99a-03731387d314 does not exist
Nov 29 05:36:51 compute-0 sudo[264287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:36:51 compute-0 sudo[264287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:51 compute-0 sudo[264287]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:51 compute-0 sudo[264312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:36:51 compute-0 sudo[264312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:36:51 compute-0 sudo[264312]: pam_unix(sudo:session): session closed for user root
Nov 29 05:36:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 46 KiB/s wr, 6 op/s
Nov 29 05:36:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "format": "json"}]: dispatch
Nov 29 05:36:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "63e32269-5cd1-4b91-be8c-8e96abc0fca0", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "format": "json"}]: dispatch
Nov 29 05:36:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff968508-e63c-4125-8d0a-ffeca3c4312c", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:36:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:36:53 compute-0 ceph-mon[75176]: pgmap v975: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 46 KiB/s wr, 6 op/s
Nov 29 05:36:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 41 KiB/s wr, 5 op/s
Nov 29 05:36:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "format": "json"}]: dispatch
Nov 29 05:36:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:36:55 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:36:55.145+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '70f1f9a5-b960-4859-afd1-e8403dcbe455' of type subvolume
Nov 29 05:36:55 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '70f1f9a5-b960-4859-afd1-e8403dcbe455' of type subvolume
Nov 29 05:36:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 05:36:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/70f1f9a5-b960-4859-afd1-e8403dcbe455'' moved to trashcan
Nov 29 05:36:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:36:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70f1f9a5-b960-4859-afd1-e8403dcbe455, vol_name:cephfs) < ""
Nov 29 05:36:55 compute-0 ceph-mon[75176]: pgmap v976: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 41 KiB/s wr, 5 op/s
Nov 29 05:36:55 compute-0 podman[264337]: 2025-11-29 05:36:55.21648869 +0000 UTC m=+0.248468320 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
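[editor's note] The health_status event above embeds the container's edpm_ansible configuration in a `config_data=` label whose value prints as a Python-literal dict, so it can be loaded with ast.literal_eval once extracted from the line. A sketch on an abridged copy of the logged value:

```python
import ast

# Abridged config_data from the ovn_controller health event above; the label's
# value is a Python-literal dict, so ast.literal_eval can parse it safely.
config_data = ast.literal_eval(
    "{'depends_on': ['openvswitch.service'], "
    "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', "
    "'test': '/openstack/healthcheck'}, 'net': 'host', 'privileged': True}"
)
print(config_data["healthcheck"]["test"])  # /openstack/healthcheck
```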
Nov 29 05:36:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:36:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 29 05:36:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 29 05:36:55 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 29 05:36:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 74 KiB/s wr, 8 op/s
Nov 29 05:36:56 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "format": "json"}]: dispatch
Nov 29 05:36:56 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "70f1f9a5-b960-4859-afd1-e8403dcbe455", "force": true, "format": "json"}]: dispatch
Nov 29 05:36:56 compute-0 ceph-mon[75176]: osdmap e140: 3 total, 3 up, 3 in
Nov 29 05:36:57 compute-0 ceph-mon[75176]: pgmap v978: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 74 KiB/s wr, 8 op/s
Nov 29 05:36:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 923 B/s rd, 66 KiB/s wr, 7 op/s
Nov 29 05:36:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:36:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 05:36:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c/.meta.tmp'
Nov 29 05:36:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c/.meta.tmp' to config b'/volumes/_nogroup/bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c/.meta'
Nov 29 05:36:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 05:36:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "format": "json"}]: dispatch
Nov 29 05:36:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 05:36:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
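[editor's note] The audit trail above shows the client creating a 1 GiB, namespace-isolated, mode-0755 subvolume and immediately resolving its mount path. A sketch of the equivalent CLI calls with the same arguments the mgr logged (the flags match `ceph fs subvolume create --help`; driving them through subprocess is illustrative):

```python
import subprocess

SUB = "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c"  # name taken from the log

# Same arguments the mgr dispatched: 1 GiB, namespace-isolated, mode 0755.
subprocess.run(
    ["ceph", "fs", "subvolume", "create", "cephfs", SUB,
     "--size", "1073741824", "--namespace-isolated", "--mode", "0755"],
    check=True,
)
path = subprocess.run(
    ["ceph", "fs", "subvolume", "getpath", "cephfs", SUB],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(path)  # e.g. /volumes/_nogroup/bf9ab1fa-.../<uuid>
```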
Nov 29 05:36:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:36:58 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:59 compute-0 ceph-mon[75176]: pgmap v979: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 923 B/s rd, 66 KiB/s wr, 7 op/s
Nov 29 05:36:59 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.185166) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619185203, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1626, "num_deletes": 255, "total_data_size": 2400312, "memory_usage": 2452144, "flush_reason": "Manual Compaction"}
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619199418, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2364297, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19517, "largest_seqno": 21142, "table_properties": {"data_size": 2356443, "index_size": 4604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17820, "raw_average_key_size": 21, "raw_value_size": 2340197, "raw_average_value_size": 2762, "num_data_blocks": 205, "num_entries": 847, "num_filter_entries": 847, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394500, "oldest_key_time": 1764394500, "file_creation_time": 1764394619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 14318 microseconds, and 7215 cpu microseconds.
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.199482) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2364297 bytes OK
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.199508) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.201618) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.201648) EVENT_LOG_v1 {"time_micros": 1764394619201638, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.201675) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2392911, prev total WAL file size 2392911, number of live WAL files 2.
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.202946) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2308KB)], [47(6964KB)]
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619203075, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9496071, "oldest_snapshot_seqno": -1}
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4483 keys, 7742953 bytes, temperature: kUnknown
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619259053, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7742953, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7711812, "index_size": 18807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 111181, "raw_average_key_size": 24, "raw_value_size": 7629693, "raw_average_value_size": 1701, "num_data_blocks": 784, "num_entries": 4483, "num_filter_entries": 4483, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.259415) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7742953 bytes
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.261841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.4 rd, 138.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.8 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.3) write-amplify(3.3) OK, records in: 5008, records dropped: 525 output_compression: NoCompression
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.261869) EVENT_LOG_v1 {"time_micros": 1764394619261856, "job": 24, "event": "compaction_finished", "compaction_time_micros": 56064, "compaction_time_cpu_micros": 28828, "output_level": 6, "num_output_files": 1, "total_output_size": 7742953, "num_input_records": 5008, "num_output_records": 4483, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619262802, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394619264974, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.202781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.265019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.265025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.265028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.265030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:36:59 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:36:59.265033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
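[editor's note] The figures in JOB 24's "compacted to" summary above follow directly from the EVENT_LOG numbers: the flushed L0 file #49 (2364297 bytes) plus L6 file #47 give input_data_size 9496071, the output is table #50 (7742953 bytes), and the job took 56064 microseconds. A quick check of the derived ratios and throughput:

```python
# Re-derive JOB 24's compaction summary from the EVENT_LOG numbers above.
l0_in = 2364297          # table #49, the freshly flushed L0 input
total_in = 9496071       # input_data_size: files #49 + #47
out = 7742953            # output table #50
t_us = 56064             # compaction_time_micros

print(round(out / l0_in, 1))               # 3.3   -> write-amplify
print(round((total_in + out) / l0_in, 1))  # 7.3   -> read-write-amplify
print(round(total_in / t_us, 1))           # 169.4 -> rd MB/sec (bytes/us == MB/s)
print(round(out / t_us, 1))                # 138.1 -> wr MB/sec
```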
Nov 29 05:37:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 5 op/s
Nov 29 05:37:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "format": "json"}]: dispatch
Nov 29 05:37:00 compute-0 ceph-mon[75176]: pgmap v980: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 5 op/s
Nov 29 05:37:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:01 compute-0 sshd-session[264363]: Invalid user user1 from 152.32.145.111 port 50708
Nov 29 05:37:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 4 op/s
Nov 29 05:37:02 compute-0 sshd-session[264363]: Received disconnect from 152.32.145.111 port 50708:11: Bye Bye [preauth]
Nov 29 05:37:02 compute-0 sshd-session[264363]: Disconnected from invalid user user1 152.32.145.111 port 50708 [preauth]
Nov 29 05:37:03 compute-0 ceph-mon[75176]: pgmap v981: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 4 op/s
Nov 29 05:37:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "25c968fa-209f-495f-aace-23679fada541", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 05:37:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/25c968fa-209f-495f-aace-23679fada541/.meta.tmp'
Nov 29 05:37:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/25c968fa-209f-495f-aace-23679fada541/.meta.tmp' to config b'/volumes/_nogroup/25c968fa-209f-495f-aace-23679fada541/.meta'
Nov 29 05:37:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 05:37:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "25c968fa-209f-495f-aace-23679fada541", "format": "json"}]: dispatch
Nov 29 05:37:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 05:37:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 05:37:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:37:03 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 4 op/s
Nov 29 05:37:04 compute-0 podman[264365]: 2025-11-29 05:37:04.058909733 +0000 UTC m=+0.098011311 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 05:37:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "25c968fa-209f-495f-aace-23679fada541", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "25c968fa-209f-495f-aace-23679fada541", "format": "json"}]: dispatch
Nov 29 05:37:04 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:05 compute-0 ceph-mon[75176]: pgmap v982: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 44 KiB/s wr, 4 op/s
Nov 29 05:37:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 190 B/s rd, 28 KiB/s wr, 3 op/s
Nov 29 05:37:07 compute-0 ceph-mon[75176]: pgmap v983: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 190 B/s rd, 28 KiB/s wr, 3 op/s
Nov 29 05:37:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 25 KiB/s wr, 2 op/s
Nov 29 05:37:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "25c968fa-209f-495f-aace-23679fada541", "format": "json"}]: dispatch
Nov 29 05:37:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:25c968fa-209f-495f-aace-23679fada541, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:37:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:25c968fa-209f-495f-aace-23679fada541, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:37:09 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:37:09.099+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '25c968fa-209f-495f-aace-23679fada541' of type subvolume
Nov 29 05:37:09 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '25c968fa-209f-495f-aace-23679fada541' of type subvolume
Nov 29 05:37:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "25c968fa-209f-495f-aace-23679fada541", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 05:37:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/25c968fa-209f-495f-aace-23679fada541'' moved to trashcan
Nov 29 05:37:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:37:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:25c968fa-209f-495f-aace-23679fada541, vol_name:cephfs) < ""
Nov 29 05:37:09 compute-0 ceph-mon[75176]: pgmap v984: 305 pgs: 305 active+clean; 45 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 25 KiB/s wr, 2 op/s
Nov 29 05:37:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 3 op/s
Nov 29 05:37:10 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "25c968fa-209f-495f-aace-23679fada541", "format": "json"}]: dispatch
Nov 29 05:37:10 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "25c968fa-209f-495f-aace-23679fada541", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:11 compute-0 ceph-mon[75176]: pgmap v985: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 3 op/s
Nov 29 05:37:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:37:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:37:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:37:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:37:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:37:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:37:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 2 op/s
Nov 29 05:37:12 compute-0 ceph-mon[75176]: pgmap v986: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 2 op/s
Nov 29 05:37:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:37:13.751 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:37:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:37:13.751 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:37:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:37:13.752 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
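[editor's note] The acquire/acquired/released trio above is oslo_concurrency's standard DEBUG instrumentation around a synchronized section. A minimal sketch of the same pattern using the real `lockutils.synchronized` decorator; the class here is illustrative, not Neutron's actual ProcessMonitor:

```python
from oslo_concurrency import lockutils

class ProcessMonitor:
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes(self):
        # The body runs under the named in-process lock; oslo_concurrency
        # emits the "Acquiring"/"acquired"/"released" DEBUG lines seen above.
        pass
```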
Nov 29 05:37:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 1 op/s
Nov 29 05:37:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:37:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1226965972' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:37:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:37:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1226965972' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:37:15 compute-0 ceph-mon[75176]: pgmap v987: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s wr, 1 op/s
Nov 29 05:37:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1226965972' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:37:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1226965972' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:37:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 3 op/s
Nov 29 05:37:17 compute-0 ceph-mon[75176]: pgmap v988: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 3 op/s
Nov 29 05:37:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp'
Nov 29 05:37:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp' to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta'
Nov 29 05:37:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "format": "json"}]: dispatch
Nov 29 05:37:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:37:17 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 19 KiB/s wr, 1 op/s
Nov 29 05:37:18 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "format": "json"}]: dispatch
Nov 29 05:37:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:37:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:37:18 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:37:18.528+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c' of type subvolume
Nov 29 05:37:18 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c' of type subvolume
Nov 29 05:37:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 05:37:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c'' moved to trashcan
Nov 29 05:37:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:37:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c, vol_name:cephfs) < ""
Nov 29 05:37:19 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:19 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:19 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/.meta.tmp'
Nov 29 05:37:19 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/.meta.tmp' to config b'/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/.meta'
Nov 29 05:37:19 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:19 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "format": "json"}]: dispatch
Nov 29 05:37:19 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:19 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:37:19 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "format": "json"}]: dispatch
Nov 29 05:37:19 compute-0 ceph-mon[75176]: pgmap v989: 305 pgs: 305 active+clean; 45 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 19 KiB/s wr, 1 op/s
Nov 29 05:37:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "format": "json"}]: dispatch
Nov 29 05:37:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bf9ab1fa-bd1c-475a-9ef3-389a03e28e9c", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 3 op/s
Nov 29 05:37:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:20 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:20 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "format": "json"}]: dispatch
Nov 29 05:37:20 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:20 compute-0 ceph-mon[75176]: pgmap v990: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 3 op/s
Nov 29 05:37:21 compute-0 podman[264384]: 2025-11-29 05:37:21.0125542 +0000 UTC m=+0.064415339 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 05:37:21 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "53de9ce2-17a6-4f82-8906-ba34ad0ed34d", "format": "json"}]: dispatch
Nov 29 05:37:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 2 op/s
Nov 29 05:37:22 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "c5bea340-145f-4db4-98d1-96c3624358f6", "format": "json"}]: dispatch
Nov 29 05:37:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:37:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:37:23 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:37:23 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:37:23 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "53de9ce2-17a6-4f82-8906-ba34ad0ed34d", "format": "json"}]: dispatch
Nov 29 05:37:23 compute-0 ceph-mon[75176]: pgmap v991: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 2 op/s
Nov 29 05:37:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:37:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:37:23 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:23 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 2 op/s
Nov 29 05:37:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "c5bea340-145f-4db4-98d1-96c3624358f6", "format": "json"}]: dispatch
Nov 29 05:37:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:37:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:24 compute-0 sshd-session[264404]: Invalid user tibero from 45.120.216.232 port 35462
Nov 29 05:37:24 compute-0 sshd-session[264404]: Received disconnect from 45.120.216.232 port 35462:11: Bye Bye [preauth]
Nov 29 05:37:24 compute-0 sshd-session[264404]: Disconnected from invalid user tibero 45.120.216.232 port 35462 [preauth]
Nov 29 05:37:25 compute-0 ceph-mon[75176]: pgmap v992: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 22 KiB/s wr, 2 op/s
Nov 29 05:37:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 41 KiB/s wr, 5 op/s
Nov 29 05:37:26 compute-0 podman[264406]: 2025-11-29 05:37:26.051914264 +0000 UTC m=+0.093310218 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:37:26 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:37:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 05:37:26 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:37:26 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:37:26 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "c5bea340-145f-4db4-98d1-96c3624358f6_052f518b-49ad-41ce-af4d-007b0f475cae", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6_052f518b-49ad-41ce-af4d-007b0f475cae, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp'
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp' to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta'
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6_052f518b-49ad-41ce-af4d-007b0f475cae, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "c5bea340-145f-4db4-98d1-96c3624358f6", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp'
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp' to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta'
Nov 29 05:37:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c5bea340-145f-4db4-98d1-96c3624358f6, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:27 compute-0 ceph-mon[75176]: pgmap v993: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 41 KiB/s wr, 5 op/s
Nov 29 05:37:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:37:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:37:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:37:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 4 op/s
Nov 29 05:37:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:37:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:37:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "c5bea340-145f-4db4-98d1-96c3624358f6_052f518b-49ad-41ce-af4d-007b0f475cae", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "c5bea340-145f-4db4-98d1-96c3624358f6", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:28 compute-0 nova_compute[254898]: 2025-11-29 05:37:28.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:37:29 compute-0 ceph-mon[75176]: pgmap v994: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 36 KiB/s wr, 4 op/s
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 55 KiB/s wr, 6 op/s
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:37:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:37:30 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:37:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:37:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "53de9ce2-17a6-4f82-8906-ba34ad0ed34d_348a3764-7ba4-4077-85c8-2f2a979915c1", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d_348a3764-7ba4-4077-85c8-2f2a979915c1, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp'
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp' to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta'
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d_348a3764-7ba4-4077-85c8-2f2a979915c1, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "53de9ce2-17a6-4f82-8906-ba34ad0ed34d", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp'
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta.tmp' to config b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c/.meta'
Nov 29 05:37:30 compute-0 ceph-mon[75176]: pgmap v995: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 55 KiB/s wr, 6 op/s
Nov 29 05:37:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:37:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:53de9ce2-17a6-4f82-8906-ba34ad0ed34d, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:30 compute-0 nova_compute[254898]: 2025-11-29 05:37:30.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:37:30 compute-0 nova_compute[254898]: 2025-11-29 05:37:30.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:37:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:37:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "53de9ce2-17a6-4f82-8906-ba34ad0ed34d_348a3764-7ba4-4077-85c8-2f2a979915c1", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "snap_name": "53de9ce2-17a6-4f82-8906-ba34ad0ed34d", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:31 compute-0 nova_compute[254898]: 2025-11-29 05:37:31.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:37:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 5 op/s
Nov 29 05:37:32 compute-0 ceph-mon[75176]: pgmap v996: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 39 KiB/s wr, 5 op/s
Nov 29 05:37:32 compute-0 nova_compute[254898]: 2025-11-29 05:37:32.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:37:32 compute-0 nova_compute[254898]: 2025-11-29 05:37:32.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:37:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:37:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 05:37:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:37:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:37:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:37:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:37:33 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 29 05:37:33 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 29 05:37:33 compute-0 nova_compute[254898]: 2025-11-29 05:37:33.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a9634de1-2230-40f8-a094-82f46777a70c", "format": "json"}]: dispatch
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a9634de1-2230-40f8-a094-82f46777a70c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:37:33 compute-0 nova_compute[254898]: 2025-11-29 05:37:33.980 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:37:33 compute-0 nova_compute[254898]: 2025-11-29 05:37:33.981 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a9634de1-2230-40f8-a094-82f46777a70c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:37:33 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:37:33.980+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a9634de1-2230-40f8-a094-82f46777a70c' of type subvolume
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a9634de1-2230-40f8-a094-82f46777a70c' of type subvolume
Nov 29 05:37:33 compute-0 nova_compute[254898]: 2025-11-29 05:37:33.981 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:37:33 compute-0 nova_compute[254898]: 2025-11-29 05:37:33.981 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:37:33 compute-0 nova_compute[254898]: 2025-11-29 05:37:33.982 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a9634de1-2230-40f8-a094-82f46777a70c'' moved to trashcan
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:37:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a9634de1-2230-40f8-a094-82f46777a70c, vol_name:cephfs) < ""
Nov 29 05:37:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 47 KiB/s wr, 6 op/s
Nov 29 05:37:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:37:34 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236421173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:37:34 compute-0 nova_compute[254898]: 2025-11-29 05:37:34.421 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:37:34 compute-0 nova_compute[254898]: 2025-11-29 05:37:34.616 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:37:34 compute-0 nova_compute[254898]: 2025-11-29 05:37:34.617 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5150MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:37:34 compute-0 nova_compute[254898]: 2025-11-29 05:37:34.618 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:37:34 compute-0 nova_compute[254898]: 2025-11-29 05:37:34.618 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:37:34 compute-0 nova_compute[254898]: 2025-11-29 05:37:34.677 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:37:34 compute-0 nova_compute[254898]: 2025-11-29 05:37:34.677 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:37:34 compute-0 nova_compute[254898]: 2025-11-29 05:37:34.690 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:37:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 29 05:37:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:37:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:37:34 compute-0 ceph-mon[75176]: osdmap e141: 3 total, 3 up, 3 in
Nov 29 05:37:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a9634de1-2230-40f8-a094-82f46777a70c", "format": "json"}]: dispatch
Nov 29 05:37:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a9634de1-2230-40f8-a094-82f46777a70c", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:34 compute-0 ceph-mon[75176]: pgmap v998: 305 pgs: 305 active+clean; 46 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 47 KiB/s wr, 6 op/s
Nov 29 05:37:34 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3236421173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:37:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 29 05:37:34 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 29 05:37:34 compute-0 podman[264476]: 2025-11-29 05:37:34.99123789 +0000 UTC m=+0.045842219 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 29 05:37:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:37:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639970689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:37:35 compute-0 nova_compute[254898]: 2025-11-29 05:37:35.135 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:37:35 compute-0 nova_compute[254898]: 2025-11-29 05:37:35.140 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:37:35 compute-0 nova_compute[254898]: 2025-11-29 05:37:35.156 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:37:35 compute-0 nova_compute[254898]: 2025-11-29 05:37:35.157 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:37:35 compute-0 nova_compute[254898]: 2025-11-29 05:37:35.158 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:37:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:35 compute-0 ceph-mon[75176]: osdmap e142: 3 total, 3 up, 3 in
Nov 29 05:37:35 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3639970689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:37:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "040becac-51dc-4867-bf68-cd9d237d5891", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 05:37:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/040becac-51dc-4867-bf68-cd9d237d5891/.meta.tmp'
Nov 29 05:37:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/040becac-51dc-4867-bf68-cd9d237d5891/.meta.tmp' to config b'/volumes/_nogroup/040becac-51dc-4867-bf68-cd9d237d5891/.meta'
Nov 29 05:37:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 05:37:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "040becac-51dc-4867-bf68-cd9d237d5891", "format": "json"}]: dispatch
Nov 29 05:37:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 05:37:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 05:37:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:37:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 96 KiB/s wr, 12 op/s
Nov 29 05:37:36 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "040becac-51dc-4867-bf68-cd9d237d5891", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:36 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "040becac-51dc-4867-bf68-cd9d237d5891", "format": "json"}]: dispatch
Nov 29 05:37:36 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:36 compute-0 ceph-mon[75176]: pgmap v1000: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 96 KiB/s wr, 12 op/s
Nov 29 05:37:37 compute-0 nova_compute[254898]: 2025-11-29 05:37:37.154 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:37:37 compute-0 nova_compute[254898]: 2025-11-29 05:37:37.155 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:37:37 compute-0 nova_compute[254898]: 2025-11-29 05:37:37.155 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:37:37 compute-0 nova_compute[254898]: 2025-11-29 05:37:37.155 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:37:37 compute-0 nova_compute[254898]: 2025-11-29 05:37:37.182 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:37:37 compute-0 nova_compute[254898]: 2025-11-29 05:37:37.182 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:37:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:37:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:37:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:37 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:37:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:37:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 8 op/s
Nov 29 05:37:38 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:37:38.337 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:37:38 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:37:38.338 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 05:37:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:37:38 compute-0 ceph-mon[75176]: pgmap v1001: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 46 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 8 op/s
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 99 KiB/s wr, 11 op/s
Nov 29 05:37:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 29 05:37:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 29 05:37:40 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "040becac-51dc-4867-bf68-cd9d237d5891", "format": "json"}]: dispatch
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:040becac-51dc-4867-bf68-cd9d237d5891, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:040becac-51dc-4867-bf68-cd9d237d5891, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:37:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:37:40.317+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '040becac-51dc-4867-bf68-cd9d237d5891' of type subvolume
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '040becac-51dc-4867-bf68-cd9d237d5891' of type subvolume
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "040becac-51dc-4867-bf68-cd9d237d5891", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/040becac-51dc-4867-bf68-cd9d237d5891'' moved to trashcan
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:040becac-51dc-4867-bf68-cd9d237d5891, vol_name:cephfs) < ""
Nov 29 05:37:40 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:37:40.340 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:37:40 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 05:37:40 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:37:40 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:37:40 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:37:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:41 compute-0 ceph-mon[75176]: pgmap v1002: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 99 KiB/s wr, 11 op/s
Nov 29 05:37:41 compute-0 ceph-mon[75176]: osdmap e143: 3 total, 3 up, 3 in
Nov 29 05:37:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:37:41 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:37:41
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.meta', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'backups', '.mgr']
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:37:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:37:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 99 KiB/s wr, 12 op/s
Nov 29 05:37:42 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "040becac-51dc-4867-bf68-cd9d237d5891", "format": "json"}]: dispatch
Nov 29 05:37:42 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "040becac-51dc-4867-bf68-cd9d237d5891", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:42 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:42 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:43 compute-0 ceph-mon[75176]: pgmap v1004: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 99 KiB/s wr, 12 op/s
Nov 29 05:37:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 05:37:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/201b4694-8935-45ce-9803-6d0546c82ba7/.meta.tmp'
Nov 29 05:37:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/201b4694-8935-45ce-9803-6d0546c82ba7/.meta.tmp' to config b'/volumes/_nogroup/201b4694-8935-45ce-9803-6d0546c82ba7/.meta'
Nov 29 05:37:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 05:37:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "format": "json"}]: dispatch
Nov 29 05:37:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 05:37:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 05:37:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:37:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 223 B/s rd, 29 KiB/s wr, 4 op/s
Nov 29 05:37:44 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:37:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:37:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:44 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:37:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:37:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "format": "json"}]: dispatch
Nov 29 05:37:45 compute-0 ceph-mon[75176]: pgmap v1005: 305 pgs: 305 active+clean; 47 MiB data, 217 MiB used, 60 GiB / 60 GiB avail; 223 B/s rd, 29 KiB/s wr, 4 op/s
Nov 29 05:37:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 05:37:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:37:47 compute-0 ceph-mon[75176]: pgmap v1006: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 05:37:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 05:37:48 compute-0 ceph-mon[75176]: pgmap v1007: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 05:37:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:37:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 05:37:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:37:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:37:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:37:48 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:37:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:37:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "format": "json"}]: dispatch
Nov 29 05:37:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:201b4694-8935-45ce-9803-6d0546c82ba7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:37:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:201b4694-8935-45ce-9803-6d0546c82ba7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:37:49 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:37:49.053+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '201b4694-8935-45ce-9803-6d0546c82ba7' of type subvolume
Nov 29 05:37:49 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '201b4694-8935-45ce-9803-6d0546c82ba7' of type subvolume
Nov 29 05:37:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 05:37:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/201b4694-8935-45ce-9803-6d0546c82ba7'' moved to trashcan
Nov 29 05:37:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:37:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:201b4694-8935-45ce-9803-6d0546c82ba7, vol_name:cephfs) < ""
Nov 29 05:37:49 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:37:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:37:49 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:37:49 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "format": "json"}]: dispatch
Nov 29 05:37:49 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "201b4694-8935-45ce-9803-6d0546c82ba7", "force": true, "format": "json"}]: dispatch
Nov 29 05:37:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Nov 29 05:37:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 29 05:37:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 29 05:37:50 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 29 05:37:50 compute-0 ceph-mon[75176]: pgmap v1008: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Nov 29 05:37:50 compute-0 ceph-mon[75176]: osdmap e144: 3 total, 3 up, 3 in
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.990100198606499e-05 of space, bias 4.0, pg target 0.11988120238327798 quantized to 16 (current 16)
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:37:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:37:51 compute-0 sudo[264500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:37:51 compute-0 sudo[264500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:51 compute-0 sudo[264500]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:51 compute-0 sudo[264531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:37:52 compute-0 sudo[264531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:52 compute-0 sudo[264531]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:52 compute-0 podman[264524]: 2025-11-29 05:37:52.004464351 +0000 UTC m=+0.052414630 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 05:37:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Nov 29 05:37:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:37:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:37:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:37:52 compute-0 sudo[264570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:37:52 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:37:52 compute-0 sudo[264570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:52 compute-0 sudo[264570]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:37:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:37:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:52 compute-0 sudo[264595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:37:52 compute-0 sudo[264595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:52 compute-0 sudo[264595]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:37:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:37:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:37:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:37:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:37:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:37:52 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 97667023-6f45-45d8-b348-6d48ceed01fb does not exist
Nov 29 05:37:52 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 9eab6a90-e4d1-4ca1-88dc-1e0213008c48 does not exist
Nov 29 05:37:52 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev c15888b5-6fda-416f-959f-36d48f2334fe does not exist
Nov 29 05:37:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:37:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:37:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:37:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:37:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:37:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:37:52 compute-0 sudo[264650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:37:52 compute-0 sudo[264650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:52 compute-0 sudo[264650]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:52 compute-0 sudo[264675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:37:52 compute-0 sudo[264675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:52 compute-0 sudo[264675]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:52 compute-0 sudo[264700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:37:52 compute-0 sudo[264700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:52 compute-0 sudo[264700]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:53 compute-0 sudo[264725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:37:53 compute-0 sudo[264725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:53 compute-0 ceph-mon[75176]: pgmap v1010: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Nov 29 05:37:53 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:37:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:37:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:37:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:37:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:37:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:37:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:37:53 compute-0 podman[264791]: 2025-11-29 05:37:53.362670043 +0000 UTC m=+0.040614163 container create 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:37:53 compute-0 systemd[1]: Started libpod-conmon-8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd.scope.
Nov 29 05:37:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:37:53 compute-0 podman[264791]: 2025-11-29 05:37:53.431193431 +0000 UTC m=+0.109137611 container init 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:37:53 compute-0 podman[264791]: 2025-11-29 05:37:53.437073843 +0000 UTC m=+0.115017963 container start 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:37:53 compute-0 podman[264791]: 2025-11-29 05:37:53.344429092 +0000 UTC m=+0.022373232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:37:53 compute-0 podman[264791]: 2025-11-29 05:37:53.440296051 +0000 UTC m=+0.118240171 container attach 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:37:53 compute-0 crazy_heisenberg[264807]: 167 167
Nov 29 05:37:53 compute-0 systemd[1]: libpod-8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd.scope: Deactivated successfully.
Nov 29 05:37:53 compute-0 podman[264791]: 2025-11-29 05:37:53.441835428 +0000 UTC m=+0.119779558 container died 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 05:37:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7e64b5d27bf0a6a0da54a347543d08bdc08ed6aad5a20396fe7fd15274f7482-merged.mount: Deactivated successfully.
Nov 29 05:37:53 compute-0 podman[264791]: 2025-11-29 05:37:53.477431989 +0000 UTC m=+0.155376109 container remove 8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_heisenberg, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 05:37:53 compute-0 systemd[1]: libpod-conmon-8de87ce0b70b43eea8c72dfc37e2f102fe088f7d25eb94786ed9fc46f0b645dd.scope: Deactivated successfully.
Nov 29 05:37:53 compute-0 podman[264832]: 2025-11-29 05:37:53.630659915 +0000 UTC m=+0.046519236 container create 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:37:53 compute-0 systemd[1]: Started libpod-conmon-627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a.scope.
Nov 29 05:37:53 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:53 compute-0 podman[264832]: 2025-11-29 05:37:53.606433659 +0000 UTC m=+0.022293020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:37:53 compute-0 podman[264832]: 2025-11-29 05:37:53.707654627 +0000 UTC m=+0.123513928 container init 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:37:53 compute-0 podman[264832]: 2025-11-29 05:37:53.714251157 +0000 UTC m=+0.130110448 container start 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:37:53 compute-0 podman[264832]: 2025-11-29 05:37:53.717615708 +0000 UTC m=+0.133475029 container attach 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:37:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Nov 29 05:37:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:37:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4811 writes, 21K keys, 4811 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4811 writes, 4811 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1490 writes, 6819 keys, 1490 commit groups, 1.0 writes per commit group, ingest: 9.58 MB, 0.02 MB/s
                                           Interval WAL: 1490 writes, 1490 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.8      0.23              0.10        12    0.019       0      0       0.0       0.0
                                             L6      1/0    7.38 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    149.4    122.4      0.63              0.30        11    0.058     48K   5786       0.0       0.0
                                            Sum      1/0    7.38 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    109.2    117.7      0.87              0.40        23    0.038     48K   5786       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1    118.1    119.5      0.38              0.18        10    0.038     23K   2592       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    149.4    122.4      0.63              0.30        11    0.058     48K   5786       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    105.6      0.23              0.10        11    0.021       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 0.9 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556a62a271f0#2 capacity: 304.00 MB usage: 8.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(557,8.30 MB,2.73181%) FilterBlock(24,141.61 KB,0.0454903%) IndexBlock(24,266.12 KB,0.0854894%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 05:37:54 compute-0 infallible_torvalds[264848]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:37:54 compute-0 infallible_torvalds[264848]: --> relative data size: 1.0
Nov 29 05:37:54 compute-0 infallible_torvalds[264848]: --> All data devices are unavailable
Nov 29 05:37:54 compute-0 systemd[1]: libpod-627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a.scope: Deactivated successfully.
Nov 29 05:37:54 compute-0 podman[264832]: 2025-11-29 05:37:54.669935893 +0000 UTC m=+1.085795194 container died 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:37:55 compute-0 ceph-mon[75176]: pgmap v1011: 305 pgs: 305 active+clean; 47 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 75 KiB/s wr, 8 op/s
Nov 29 05:37:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-93fb1a198ee6924e8a2c7ffbe63ef8003f42faf53a8d2a84960e7fd2dd4d1225-merged.mount: Deactivated successfully.
Nov 29 05:37:55 compute-0 podman[264832]: 2025-11-29 05:37:55.194813129 +0000 UTC m=+1.610672430 container remove 627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:37:55 compute-0 systemd[1]: libpod-conmon-627ef01dd827660106fa4e7b1543df526c562d0ac998bd904ebb4eea6c601a2a.scope: Deactivated successfully.
Nov 29 05:37:55 compute-0 sudo[264725]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:37:55 compute-0 sudo[264889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:37:55 compute-0 sudo[264889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:55 compute-0 sudo[264889]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/.meta.tmp'
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/.meta.tmp' to config b'/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/.meta'
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "format": "json"}]: dispatch
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:37:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:37:55 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:55 compute-0 sudo[264914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:37:55 compute-0 sudo[264914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:55 compute-0 sudo[264914]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:55 compute-0 sudo[264939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:37:55 compute-0 sudo[264939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:55 compute-0 sudo[264939]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:55 compute-0 sudo[264964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:37:55 compute-0 sudo[264964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:37:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:37:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 05:37:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:37:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:37:55 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:37:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:37:55 compute-0 podman[265029]: 2025-11-29 05:37:55.881389346 +0000 UTC m=+0.044830286 container create 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:37:55 compute-0 systemd[1]: Started libpod-conmon-5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b.scope.
Nov 29 05:37:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:37:55 compute-0 podman[265029]: 2025-11-29 05:37:55.859102487 +0000 UTC m=+0.022543437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:37:55 compute-0 podman[265029]: 2025-11-29 05:37:55.958198024 +0000 UTC m=+0.121638954 container init 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:37:55 compute-0 podman[265029]: 2025-11-29 05:37:55.969490387 +0000 UTC m=+0.132931307 container start 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:37:55 compute-0 podman[265029]: 2025-11-29 05:37:55.972868639 +0000 UTC m=+0.136309559 container attach 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:37:55 compute-0 confident_proskuriakova[265045]: 167 167
Nov 29 05:37:55 compute-0 systemd[1]: libpod-5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b.scope: Deactivated successfully.
Nov 29 05:37:55 compute-0 conmon[265045]: conmon 5d28d98700483e3e4a68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b.scope/container/memory.events
Nov 29 05:37:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 05:37:56 compute-0 podman[265050]: 2025-11-29 05:37:56.041720624 +0000 UTC m=+0.040926410 container died 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:37:56 compute-0 podman[265050]: 2025-11-29 05:37:56.081022815 +0000 UTC m=+0.080228581 container remove 5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:37:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-04f820e928d1b7ba203adc1d50e057a912f848c5f71f2794310e2e7a55c1884d-merged.mount: Deactivated successfully.
Nov 29 05:37:56 compute-0 systemd[1]: libpod-conmon-5d28d98700483e3e4a680ba6043d0ca49030265c5215718e7f79c248dd1b286b.scope: Deactivated successfully.
Nov 29 05:37:56 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:37:56 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:37:56 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:37:56 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:37:56 compute-0 podman[265065]: 2025-11-29 05:37:56.214179035 +0000 UTC m=+0.106848425 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:37:56 compute-0 podman[265098]: 2025-11-29 05:37:56.273851789 +0000 UTC m=+0.055206817 container create 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 29 05:37:56 compute-0 systemd[1]: Started libpod-conmon-09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0.scope.
Nov 29 05:37:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac7e1d2757a47d447f80c8fe846c47ea16393d42b4b4a2bffc3df7ee0052cc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac7e1d2757a47d447f80c8fe846c47ea16393d42b4b4a2bffc3df7ee0052cc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac7e1d2757a47d447f80c8fe846c47ea16393d42b4b4a2bffc3df7ee0052cc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac7e1d2757a47d447f80c8fe846c47ea16393d42b4b4a2bffc3df7ee0052cc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:56 compute-0 podman[265098]: 2025-11-29 05:37:56.257868852 +0000 UTC m=+0.039223860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:37:56 compute-0 podman[265098]: 2025-11-29 05:37:56.353370263 +0000 UTC m=+0.134725341 container init 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 05:37:56 compute-0 podman[265098]: 2025-11-29 05:37:56.362524954 +0000 UTC m=+0.143879942 container start 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 05:37:56 compute-0 podman[265098]: 2025-11-29 05:37:56.36567552 +0000 UTC m=+0.147030538 container attach 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 05:37:57 compute-0 agitated_haibt[265115]: {
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:     "0": [
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:         {
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "devices": [
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "/dev/loop3"
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             ],
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_name": "ceph_lv0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_size": "21470642176",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "name": "ceph_lv0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "tags": {
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.cluster_name": "ceph",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.crush_device_class": "",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.encrypted": "0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.osd_id": "0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.type": "block",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.vdo": "0"
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             },
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "type": "block",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "vg_name": "ceph_vg0"
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:         }
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:     ],
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:     "1": [
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:         {
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "devices": [
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "/dev/loop4"
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             ],
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_name": "ceph_lv1",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_size": "21470642176",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "name": "ceph_lv1",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "tags": {
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.cluster_name": "ceph",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.crush_device_class": "",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.encrypted": "0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.osd_id": "1",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.type": "block",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.vdo": "0"
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             },
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "type": "block",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "vg_name": "ceph_vg1"
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:         }
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:     ],
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:     "2": [
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:         {
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "devices": [
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "/dev/loop5"
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             ],
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_name": "ceph_lv2",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_size": "21470642176",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "name": "ceph_lv2",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "tags": {
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.cluster_name": "ceph",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.crush_device_class": "",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.encrypted": "0",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.osd_id": "2",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.type": "block",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:                 "ceph.vdo": "0"
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             },
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "type": "block",
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:             "vg_name": "ceph_vg2"
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:         }
Nov 29 05:37:57 compute-0 agitated_haibt[265115]:     ]
Nov 29 05:37:57 compute-0 agitated_haibt[265115]: }
Nov 29 05:37:57 compute-0 systemd[1]: libpod-09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0.scope: Deactivated successfully.
Nov 29 05:37:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:37:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "format": "json"}]: dispatch
Nov 29 05:37:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:37:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:37:57 compute-0 ceph-mon[75176]: pgmap v1012: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 05:37:57 compute-0 podman[265124]: 2025-11-29 05:37:57.172788593 +0000 UTC m=+0.037013067 container died 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ac7e1d2757a47d447f80c8fe846c47ea16393d42b4b4a2bffc3df7ee0052cc9-merged.mount: Deactivated successfully.
Nov 29 05:37:57 compute-0 podman[265124]: 2025-11-29 05:37:57.244083977 +0000 UTC m=+0.108308361 container remove 09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:37:57 compute-0 systemd[1]: libpod-conmon-09ca70ed900a40192a12f025dffd9035131241a96c70d76364a57c43c243dac0.scope: Deactivated successfully.
Nov 29 05:37:57 compute-0 sudo[264964]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:57 compute-0 sudo[265138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:37:57 compute-0 sudo[265138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:57 compute-0 sudo[265138]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:57 compute-0 sudo[265163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:37:57 compute-0 sudo[265163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:57 compute-0 sudo[265163]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:57 compute-0 sudo[265188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:37:57 compute-0 sudo[265188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:57 compute-0 sudo[265188]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:57 compute-0 sudo[265213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:37:57 compute-0 sudo[265213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:57 compute-0 podman[265279]: 2025-11-29 05:37:57.884552209 +0000 UTC m=+0.044087947 container create 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 05:37:57 compute-0 systemd[1]: Started libpod-conmon-7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58.scope.
Nov 29 05:37:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:37:57 compute-0 podman[265279]: 2025-11-29 05:37:57.862814784 +0000 UTC m=+0.022350612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:37:57 compute-0 podman[265279]: 2025-11-29 05:37:57.964690537 +0000 UTC m=+0.124226295 container init 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:37:57 compute-0 podman[265279]: 2025-11-29 05:37:57.97101385 +0000 UTC m=+0.130549588 container start 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:37:57 compute-0 podman[265279]: 2025-11-29 05:37:57.973737706 +0000 UTC m=+0.133273444 container attach 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:37:57 compute-0 cool_mirzakhani[265296]: 167 167
Nov 29 05:37:57 compute-0 systemd[1]: libpod-7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58.scope: Deactivated successfully.
Nov 29 05:37:57 compute-0 podman[265279]: 2025-11-29 05:37:57.975699573 +0000 UTC m=+0.135235311 container died 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 05:37:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e43a2465f2d330d79a34442fb31d1b665f8f96b3ee3334fcd922340271fc0ded-merged.mount: Deactivated successfully.
Nov 29 05:37:58 compute-0 podman[265279]: 2025-11-29 05:37:58.006464708 +0000 UTC m=+0.166000446 container remove 7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mirzakhani, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 05:37:58 compute-0 systemd[1]: libpod-conmon-7ef2afbf35cac18f3a7cc774e0f757b8a4f481564070c60860a683d948526b58.scope: Deactivated successfully.
Nov 29 05:37:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 05:37:58 compute-0 podman[265320]: 2025-11-29 05:37:58.180205329 +0000 UTC m=+0.037631981 container create 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 05:37:58 compute-0 systemd[1]: Started libpod-conmon-95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c.scope.
Nov 29 05:37:58 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e7d1d7b18a986ee29972eb470fa40fef76d4fca33357fec0e97511f501ec5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e7d1d7b18a986ee29972eb470fa40fef76d4fca33357fec0e97511f501ec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e7d1d7b18a986ee29972eb470fa40fef76d4fca33357fec0e97511f501ec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c17e7d1d7b18a986ee29972eb470fa40fef76d4fca33357fec0e97511f501ec5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:37:58 compute-0 podman[265320]: 2025-11-29 05:37:58.163246989 +0000 UTC m=+0.020673661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:37:58 compute-0 podman[265320]: 2025-11-29 05:37:58.260763479 +0000 UTC m=+0.118190161 container init 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 05:37:58 compute-0 podman[265320]: 2025-11-29 05:37:58.266046147 +0000 UTC m=+0.123472789 container start 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 29 05:37:58 compute-0 podman[265320]: 2025-11-29 05:37:58.268781173 +0000 UTC m=+0.126207825 container attach 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:37:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve49", "tenant_id": "e577c04bfe1b459f9aebd0f826827833", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:37:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 05:37:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) v1
Nov 29 05:37:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 29 05:37:58 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID eve49 with tenant e577c04bfe1b459f9aebd0f826827833
Nov 29 05:37:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:37:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 05:37:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:37:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:37:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:37:59 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:37:59 compute-0 ceph-mon[75176]: pgmap v1013: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 45 KiB/s wr, 5 op/s
Nov 29 05:37:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 29 05:37:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:37:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:37:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:37:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:37:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:37:59 compute-0 focused_feistel[265337]: {
Nov 29 05:37:59 compute-0 focused_feistel[265337]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "osd_id": 0,
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "type": "bluestore"
Nov 29 05:37:59 compute-0 focused_feistel[265337]:     },
Nov 29 05:37:59 compute-0 focused_feistel[265337]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "osd_id": 1,
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "type": "bluestore"
Nov 29 05:37:59 compute-0 focused_feistel[265337]:     },
Nov 29 05:37:59 compute-0 focused_feistel[265337]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "osd_id": 2,
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:37:59 compute-0 focused_feistel[265337]:         "type": "bluestore"
Nov 29 05:37:59 compute-0 focused_feistel[265337]:     }
Nov 29 05:37:59 compute-0 focused_feistel[265337]: }
Nov 29 05:37:59 compute-0 systemd[1]: libpod-95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c.scope: Deactivated successfully.
Nov 29 05:37:59 compute-0 podman[265320]: 2025-11-29 05:37:59.258116203 +0000 UTC m=+1.115542855 container died 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 05:37:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c17e7d1d7b18a986ee29972eb470fa40fef76d4fca33357fec0e97511f501ec5-merged.mount: Deactivated successfully.
Nov 29 05:37:59 compute-0 podman[265320]: 2025-11-29 05:37:59.302521907 +0000 UTC m=+1.159948559 container remove 95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:37:59 compute-0 sudo[265213]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:37:59 compute-0 systemd[1]: libpod-conmon-95eefcba26a9b9039d6c98ccce94f9d0109207207176ffef5544b94c3c503f4c.scope: Deactivated successfully.
Nov 29 05:37:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:37:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:37:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:37:59 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a10508ad-79c0-41f2-9fdf-df00e4c59927 does not exist
Nov 29 05:37:59 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 8b4ea74e-bd67-4c06-afc9-dc77352aadeb does not exist
Nov 29 05:37:59 compute-0 sudo[265382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:37:59 compute-0 sudo[265382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:59 compute-0 sudo[265382]: pam_unix(sudo:session): session closed for user root
Nov 29 05:37:59 compute-0 sudo[265407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:37:59 compute-0 sudo[265407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:37:59 compute-0 sudo[265407]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 75 KiB/s wr, 9 op/s
Nov 29 05:38:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve49", "tenant_id": "e577c04bfe1b459f9aebd0f826827833", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:38:00 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:00 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:00 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:38:00 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:38:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:01 compute-0 ceph-mon[75176]: pgmap v1014: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 75 KiB/s wr, 9 op/s
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 173 B/s rd, 64 KiB/s wr, 7 op/s
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve48", "tenant_id": "e577c04bfe1b459f9aebd0f826827833", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 05:38:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) v1
Nov 29 05:38:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 29 05:38:02 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID eve48 with tenant e577c04bfe1b459f9aebd0f826827833
Nov 29 05:38:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:38:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:38:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 05:38:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:38:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:38:02 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:03 compute-0 ceph-mon[75176]: pgmap v1015: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 173 B/s rd, 64 KiB/s wr, 7 op/s
Nov 29 05:38:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 29 05:38:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:38:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:38:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:38:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 7 op/s
Nov 29 05:38:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve48", "tenant_id": "e577c04bfe1b459f9aebd0f826827833", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:05 compute-0 ceph-mon[75176]: pgmap v1016: 305 pgs: 305 active+clean; 48 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 63 KiB/s wr, 7 op/s
Nov 29 05:38:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 11 op/s
Nov 29 05:38:06 compute-0 podman[265433]: 2025-11-29 05:38:06.044699597 +0000 UTC m=+0.098037902 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:38:06 compute-0 ceph-mon[75176]: pgmap v1017: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 91 KiB/s wr, 11 op/s
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:38:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:06 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:38:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0) v1
Nov 29 05:38:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 29 05:38:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0) v1
Nov 29 05:38:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve48"}]: dispatch
Nov 29 05:38:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0
Nov 29 05:38:06 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0],prefix=session evict} (starting...)
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/79586ddb-9940-4101-a183-8795d6ac1e84/.meta.tmp'
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/79586ddb-9940-4101-a183-8795d6ac1e84/.meta.tmp' to config b'/volumes/_nogroup/79586ddb-9940-4101-a183-8795d6ac1e84/.meta'
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "format": "json"}]: dispatch
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 05:38:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 05:38:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:38:07 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve48", "format": "json"}]: dispatch
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve48"}]: dispatch
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "format": "json"}]: dispatch
Nov 29 05:38:07 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 70 KiB/s wr, 8 op/s
Nov 29 05:38:08 compute-0 ceph-mon[75176]: pgmap v1018: 305 pgs: 305 active+clean; 48 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 70 KiB/s wr, 8 op/s
Nov 29 05:38:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve47", "tenant_id": "e577c04bfe1b459f9aebd0f826827833", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 05:38:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) v1
Nov 29 05:38:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 29 05:38:09 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID eve47 with tenant e577c04bfe1b459f9aebd0f826827833
Nov 29 05:38:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 104 KiB/s wr, 12 op/s
Nov 29 05:38:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 29 05:38:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, tenant_id:e577c04bfe1b459f9aebd0f826827833, vol_name:cephfs) < ""
Nov 29 05:38:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:38:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 05:38:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:38:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:38:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:38:10 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:38:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:11 compute-0 sshd-session[265453]: Invalid user pi from 218.157.163.203 port 57141
Nov 29 05:38:11 compute-0 sshd-session[265455]: Invalid user pi from 218.157.163.203 port 57228
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 05:38:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve47", "tenant_id": "e577c04bfe1b459f9aebd0f826827833", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:11 compute-0 ceph-mon[75176]: pgmap v1019: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 104 KiB/s wr, 12 op/s
Nov 29 05:38:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_0065e446-d05c-42f4-b14d-c32152b4c886", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:38:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/faeaf227-675c-42df-9bf7-248fca8b7753/.meta.tmp'
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/faeaf227-675c-42df-9bf7-248fca8b7753/.meta.tmp' to config b'/volumes/_nogroup/faeaf227-675c-42df-9bf7-248fca8b7753/.meta'
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "format": "json"}]: dispatch
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 05:38:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:38:11 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:11 compute-0 sshd-session[265453]: Connection closed by invalid user pi 218.157.163.203 port 57141 [preauth]
Nov 29 05:38:11 compute-0 sshd-session[265455]: Connection closed by invalid user pi 218.157.163.203 port 57228 [preauth]
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:38:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:38:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 8 op/s
Nov 29 05:38:12 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:12 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:12 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:12 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:38:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:38:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:13 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "format": "json"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-mon[75176]: pgmap v1020: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 8 op/s
Nov 29 05:38:13 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:38:13.753 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:38:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:38:13.753 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:38:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:38:13.753 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve47", "format": "json"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) v1
Nov 29 05:38:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) v1
Nov 29 05:38:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve47"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve47", "format": "json"}]: dispatch
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0
Nov 29 05:38:13 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0],prefix=session evict} (starting...)
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 8 op/s
Nov 29 05:38:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:38:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1525944531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:38:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1525944531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve47", "format": "json"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve47", "format": "json"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve47"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Nov 29 05:38:14 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve47", "format": "json"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mon[75176]: pgmap v1021: 305 pgs: 305 active+clean; 49 MiB data, 219 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 61 KiB/s wr, 8 op/s
Nov 29 05:38:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1525944531' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1525944531' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "format": "json"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:faeaf227-675c-42df-9bf7-248fca8b7753, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:faeaf227-675c-42df-9bf7-248fca8b7753, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:14 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:14.626+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'faeaf227-675c-42df-9bf7-248fca8b7753' of type subvolume
Nov 29 05:38:14 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'faeaf227-675c-42df-9bf7-248fca8b7753' of type subvolume
Nov 29 05:38:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 05:38:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/faeaf227-675c-42df-9bf7-248fca8b7753'' moved to trashcan
Nov 29 05:38:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:38:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:faeaf227-675c-42df-9bf7-248fca8b7753, vol_name:cephfs) < ""
Nov 29 05:38:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:15 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "format": "json"}]: dispatch
Nov 29 05:38:15 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "faeaf227-675c-42df-9bf7-248fca8b7753", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 124 KiB/s wr, 15 op/s
Nov 29 05:38:16 compute-0 ceph-mon[75176]: pgmap v1022: 305 pgs: 305 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 124 KiB/s wr, 15 op/s
Nov 29 05:38:16 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:16 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:38:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 05:38:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:38:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:38:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:38:17 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:38:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:17 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:38:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:38:17 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 96 KiB/s wr, 11 op/s
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "format": "json"}]: dispatch
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:79586ddb-9940-4101-a183-8795d6ac1e84, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:79586ddb-9940-4101-a183-8795d6ac1e84, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:18.229+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '79586ddb-9940-4101-a183-8795d6ac1e84' of type subvolume
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '79586ddb-9940-4101-a183-8795d6ac1e84' of type subvolume
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/79586ddb-9940-4101-a183-8795d6ac1e84'' moved to trashcan
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:79586ddb-9940-4101-a183-8795d6ac1e84, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve49", "format": "json"}]: dispatch
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) v1
Nov 29 05:38:18 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 29 05:38:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) v1
Nov 29 05:38:18 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve49"}]: dispatch
Nov 29 05:38:18 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve49", "format": "json"}]: dispatch
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0
Nov 29 05:38:18 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886/9abefe74-5963-497e-9f27-35334ccad4d0],prefix=session evict} (starting...)
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "format": "json"}]: dispatch
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0065e446-d05c-42f4-b14d-c32152b4c886, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0065e446-d05c-42f4-b14d-c32152b4c886, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:18.521+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0065e446-d05c-42f4-b14d-c32152b4c886' of type subvolume
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0065e446-d05c-42f4-b14d-c32152b4c886' of type subvolume
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0065e446-d05c-42f4-b14d-c32152b4c886'' moved to trashcan
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:38:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0065e446-d05c-42f4-b14d-c32152b4c886, vol_name:cephfs) < ""
Nov 29 05:38:19 compute-0 ceph-mon[75176]: pgmap v1023: 305 pgs: 305 active+clean; 49 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 96 KiB/s wr, 11 op/s
Nov 29 05:38:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.eve49", "format": "json"}]: dispatch
Nov 29 05:38:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.eve49"}]: dispatch
Nov 29 05:38:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Nov 29 05:38:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 138 KiB/s wr, 15 op/s
Nov 29 05:38:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:20 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "format": "json"}]: dispatch
Nov 29 05:38:20 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "79586ddb-9940-4101-a183-8795d6ac1e84", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:20 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve49", "format": "json"}]: dispatch
Nov 29 05:38:20 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "auth_id": "eve49", "format": "json"}]: dispatch
Nov 29 05:38:20 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "format": "json"}]: dispatch
Nov 29 05:38:20 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0065e446-d05c-42f4-b14d-c32152b4c886", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:20 compute-0 ceph-mon[75176]: pgmap v1024: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 138 KiB/s wr, 15 op/s
Nov 29 05:38:20 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:20 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:38:20 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:20 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:38:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:20 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:20 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:20 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:21 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:21 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:21 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:21 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 105 KiB/s wr, 12 op/s
Nov 29 05:38:22 compute-0 ceph-mon[75176]: pgmap v1025: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 105 KiB/s wr, 12 op/s
Nov 29 05:38:22 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a00c7ebd-01d8-4358-9f97-04e4aa820623", "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:a00c7ebd-01d8-4358-9f97-04e4aa820623, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 29 05:38:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:a00c7ebd-01d8-4358-9f97-04e4aa820623, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 29 05:38:23 compute-0 podman[265461]: 2025-11-29 05:38:23.054317038 +0000 UTC m=+0.090822288 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 05:38:23 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a00c7ebd-01d8-4358-9f97-04e4aa820623", "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 105 KiB/s wr, 12 op/s
Nov 29 05:38:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:24 compute-0 ceph-mon[75176]: pgmap v1026: 305 pgs: 305 active+clean; 50 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 105 KiB/s wr, 12 op/s
Nov 29 05:38:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:38:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 05:38:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:38:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:38:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:38:24 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:38:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:25 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:38:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:38:25 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 50 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 136 KiB/s wr, 16 op/s
Nov 29 05:38:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "a00c7ebd-01d8-4358-9f97-04e4aa820623", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a00c7ebd-01d8-4358-9f97-04e4aa820623, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 29 05:38:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a00c7ebd-01d8-4358-9f97-04e4aa820623, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 29 05:38:26 compute-0 ceph-mon[75176]: pgmap v1027: 305 pgs: 305 active+clean; 50 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 136 KiB/s wr, 16 op/s
Nov 29 05:38:26 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "a00c7ebd-01d8-4358-9f97-04e4aa820623", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "e90efdb1-518e-4a19-a290-0fbf105b6f6d", "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:e90efdb1-518e-4a19-a290-0fbf105b6f6d, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 29 05:38:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:e90efdb1-518e-4a19-a290-0fbf105b6f6d, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Nov 29 05:38:27 compute-0 podman[265482]: 2025-11-29 05:38:27.046922101 +0000 UTC m=+0.099468717 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "e90efdb1-518e-4a19-a290-0fbf105b6f6d", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e90efdb1-518e-4a19-a290-0fbf105b6f6d, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:e90efdb1-518e-4a19-a290-0fbf105b6f6d, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Nov 29 05:38:27 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "e90efdb1-518e-4a19-a290-0fbf105b6f6d", "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:27 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "e90efdb1-518e-4a19-a290-0fbf105b6f6d", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d9d275b-9d0b-4256-9071-300779a207f4/.meta.tmp'
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d9d275b-9d0b-4256-9071-300779a207f4/.meta.tmp' to config b'/volumes/_nogroup/1d9d275b-9d0b-4256-9071-300779a207f4/.meta'
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "format": "json"}]: dispatch
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 05:38:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:38:27 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:38:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:27 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:38:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 50 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 73 KiB/s wr, 9 op/s
Nov 29 05:38:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 05:38:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6c79e8b2-8385-4693-a845-3fe4aa3849bb/.meta.tmp'
Nov 29 05:38:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6c79e8b2-8385-4693-a845-3fe4aa3849bb/.meta.tmp' to config b'/volumes/_nogroup/6c79e8b2-8385-4693-a845-3fe4aa3849bb/.meta'
Nov 29 05:38:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 05:38:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "format": "json"}]: dispatch
Nov 29 05:38:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 05:38:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 05:38:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:38:28 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "format": "json"}]: dispatch
Nov 29 05:38:28 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:38:28 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:28 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:28 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:28 compute-0 ceph-mon[75176]: pgmap v1028: 305 pgs: 305 active+clean; 50 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 73 KiB/s wr, 9 op/s
Nov 29 05:38:28 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:28 compute-0 nova_compute[254898]: 2025-11-29 05:38:28.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:28 compute-0 nova_compute[254898]: 2025-11-29 05:38:28.969 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:28 compute-0 nova_compute[254898]: 2025-11-29 05:38:28.969 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 05:38:29 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:29 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "format": "json"}]: dispatch
Nov 29 05:38:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 105 KiB/s wr, 12 op/s
Nov 29 05:38:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:30 compute-0 nova_compute[254898]: 2025-11-29 05:38:30.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:30 compute-0 nova_compute[254898]: 2025-11-29 05:38:30.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:31 compute-0 ceph-mon[75176]: pgmap v1029: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 105 KiB/s wr, 12 op/s
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "format": "json"}]: dispatch
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1d9d275b-9d0b-4256-9071-300779a207f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1d9d275b-9d0b-4256-9071-300779a207f4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:31 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:31.742+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d9d275b-9d0b-4256-9071-300779a207f4' of type subvolume
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d9d275b-9d0b-4256-9071-300779a207f4' of type subvolume
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1d9d275b-9d0b-4256-9071-300779a207f4'' moved to trashcan
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d9d275b-9d0b-4256-9071-300779a207f4, vol_name:cephfs) < ""
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:38:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 05:38:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:38:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:38:31 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 8 op/s
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6565f80a-02f8-4a73-b996-bca74f45d589/.meta.tmp'
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6565f80a-02f8-4a73-b996-bca74f45d589/.meta.tmp' to config b'/volumes/_nogroup/6565f80a-02f8-4a73-b996-bca74f45d589/.meta'
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 05:38:32 compute-0 nova_compute[254898]: 2025-11-29 05:38:32.298 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 05:38:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:38:32 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1d9d275b-9d0b-4256-9071-300779a207f4", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:38:32 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mon[75176]: pgmap v1030: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 8 op/s
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/.meta.tmp'
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/.meta.tmp' to config b'/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/.meta'
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 05:38:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 05:38:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:38:32 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:32 compute-0 nova_compute[254898]: 2025-11-29 05:38:32.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:32 compute-0 nova_compute[254898]: 2025-11-29 05:38:32.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "format": "json"}]: dispatch
Nov 29 05:38:33 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "format": "json"}]: dispatch
Nov 29 05:38:33 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:33 compute-0 nova_compute[254898]: 2025-11-29 05:38:33.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:33 compute-0 nova_compute[254898]: 2025-11-29 05:38:33.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:38:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 8 op/s
Nov 29 05:38:34 compute-0 ceph-mon[75176]: pgmap v1031: 305 pgs: 305 active+clean; 51 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 63 KiB/s wr, 8 op/s
Nov 29 05:38:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:35 compute-0 nova_compute[254898]: 2025-11-29 05:38:35.254 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:38:35 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:38:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:38:35 compute-0 nova_compute[254898]: 2025-11-29 05:38:35.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:36 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 51 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 100 KiB/s wr, 12 op/s
Nov 29 05:38:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.065 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.065 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.066 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.066 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.067 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6565f80a-02f8-4a73-b996-bca74f45d589, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6565f80a-02f8-4a73-b996-bca74f45d589, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:36.323+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6565f80a-02f8-4a73-b996-bca74f45d589' of type subvolume
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6565f80a-02f8-4a73-b996-bca74f45d589' of type subvolume
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6565f80a-02f8-4a73-b996-bca74f45d589'' moved to trashcan
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6565f80a-02f8-4a73-b996-bca74f45d589, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0940c3a8-0a26-4b45-8cd5-2278d86a8159/.meta.tmp'
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0940c3a8-0a26-4b45-8cd5-2278d86a8159/.meta.tmp' to config b'/volumes/_nogroup/0940c3a8-0a26-4b45-8cd5-2278d86a8159/.meta'
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:38:36 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:38:36 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/928314478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.494 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.658 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.659 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.660 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.660 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "auth_id": "tempest-cephx-id-2083182201", "tenant_id": "ae2a6e9fbea0426ebacf2fe56abb903e", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume authorize, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, tenant_id:ae2a6e9fbea0426ebacf2fe56abb903e, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"} v 0) v1
Nov 29 05:38:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-2083182201 with tenant ae2a6e9fbea0426ebacf2fe56abb903e
Nov 29 05:38:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2083182201", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c5ccc350-84d0-463a-8142-2450838c9e41", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2083182201", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c5ccc350-84d0-463a-8142-2450838c9e41", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2083182201", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c5ccc350-84d0-463a-8142-2450838c9e41", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume authorize, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, tenant_id:ae2a6e9fbea0426ebacf2fe56abb903e, vol_name:cephfs) < ""
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.976 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:38:36 compute-0 nova_compute[254898]: 2025-11-29 05:38:36.976 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:38:36 compute-0 podman[265532]: 2025-11-29 05:38:36.985019475 +0000 UTC m=+0.042255353 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:38:37 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mon[75176]: pgmap v1032: 305 pgs: 305 active+clean; 51 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 100 KiB/s wr, 12 op/s
Nov 29 05:38:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:37 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/928314478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2083182201", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c5ccc350-84d0-463a-8142-2450838c9e41", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2083182201", "caps": ["mds", "allow rw path=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c5ccc350-84d0-463a-8142-2450838c9e41", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.145 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing inventories for resource provider 59594bc8-0143-475b-913f-cbe106b48966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.214 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating ProviderTree inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.214 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.231 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing aggregate associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.249 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing trait associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.262 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:38:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:38:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907328920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "auth_id": "tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume deauthorize, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.675 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.681 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.695 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.697 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.697 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.698 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.698 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 05:38:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"} v 0) v1
Nov 29 05:38:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2083182201"} v 0) v1
Nov 29 05:38:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2083182201"}]: dispatch
Nov 29 05:38:37 compute-0 nova_compute[254898]: 2025-11-29 05:38:37.710 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 05:38:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2083182201"}]': finished
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume deauthorize, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "auth_id": "tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume evict, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2083182201, client_metadata.root=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3
Nov 29 05:38:37 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-2083182201,client_metadata.root=/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41/9f7bf8da-4ed9-41a3-8ae3-ce8081d668f3],prefix=session evict} (starting...)
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2083182201, format:json, prefix:fs subvolume evict, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
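This deauthorize/evict pair is the CephFS access teardown: deauthorize drops the cephx identity (the mgr relays auth get and then auth rm to the mon, as the audit lines above show), and evict kicks any MDS sessions whose auth_name and client root still match the subvolume path. A sketch replaying the same two steps with the ceph CLI, using the names from the log:

    # Replaying the teardown above via the ceph CLI (sketch; auth_id is
    # positional in `ceph fs subvolume deauthorize/evict`).
    import subprocess

    VOL = "cephfs"
    SUB = "c5ccc350-84d0-463a-8142-2450838c9e41"
    AUTH = "tempest-cephx-id-2083182201"

    subprocess.run(["ceph", "fs", "subvolume", "deauthorize", VOL, SUB, AUTH],
                   check=True)
    subprocess.run(["ceph", "fs", "subvolume", "evict", VOL, SUB, AUTH],
                   check=True)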
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c5ccc350-84d0-463a-8142-2450838c9e41, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c5ccc350-84d0-463a-8142-2450838c9e41, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:37 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:37.860+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5ccc350-84d0-463a-8142-2450838c9e41' of type subvolume
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c5ccc350-84d0-463a-8142-2450838c9e41' of type subvolume
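The "(95) Operation not supported" replies are expected, not cluster failures: fs clone status only applies to subvolumes created as snapshot clones, and the client probes it before deletion, treating EOPNOTSUPP as "plain subvolume, nothing in flight". A hedged sketch of that probe (the error-string match is an assumption; matching on the return code alone would be looser):

    import subprocess

    def clone_in_progress(vol: str, sub: str) -> bool:
        # Probe `fs clone status`; EOPNOTSUPP means an ordinary subvolume.
        res = subprocess.run(["ceph", "fs", "clone", "status", vol, sub],
                             capture_output=True, text=True)
        if res.returncode != 0 and "not allowed on subvolume" in res.stderr:
            return False
        res.check_returncode()
        return '"in-progress"' in res.stdout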
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c5ccc350-84d0-463a-8142-2450838c9e41'' moved to trashcan
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:38:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c5ccc350-84d0-463a-8142-2450838c9e41, vol_name:cephfs) < ""
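fs subvolume rm with force returns quickly because nothing is unlinked inline: the subvolume directory is renamed into the volume's trash ("moved to trashcan") and an async purge job is queued, so the actual data deletion happens in the background. The same pattern on a local filesystem, as a sketch:

    # Rename-into-trash plus background purge (illustrative local version
    # of the pattern logged above).
    import os, shutil, threading, uuid

    def rm_async(path: str, trash: str) -> None:
        os.makedirs(trash, exist_ok=True)
        target = os.path.join(trash, uuid.uuid4().hex)
        os.rename(path, target)                  # fast, atomic move
        threading.Thread(target=shutil.rmtree,   # slow purge, off the caller's path
                         args=(target,), daemon=True).start()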
Nov 29 05:38:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 51 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 69 KiB/s wr, 7 op/s
Nov 29 05:38:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "format": "json"}]: dispatch
Nov 29 05:38:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6565f80a-02f8-4a73-b996-bca74f45d589", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "format": "json"}]: dispatch
Nov 29 05:38:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "auth_id": "tempest-cephx-id-2083182201", "tenant_id": "ae2a6e9fbea0426ebacf2fe56abb903e", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:38 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1907328920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:38:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 05:38:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2083182201"}]: dispatch
Nov 29 05:38:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2083182201"}]': finished
Nov 29 05:38:38 compute-0 nova_compute[254898]: 2025-11-29 05:38:38.711 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:38 compute-0 nova_compute[254898]: 2025-11-29 05:38:38.711 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:38 compute-0 nova_compute[254898]: 2025-11-29 05:38:38.711 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:38:38 compute-0 nova_compute[254898]: 2025-11-29 05:38:38.712 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:38:38 compute-0 nova_compute[254898]: 2025-11-29 05:38:38.730 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:38:38 compute-0 nova_compute[254898]: 2025-11-29 05:38:38.730 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:38:39 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "auth_id": "tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "auth_id": "tempest-cephx-id-2083182201", "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c5ccc350-84d0-463a-8142-2450838c9e41", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mon[75176]: pgmap v1033: 305 pgs: 305 active+clean; 51 MiB data, 222 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 69 KiB/s wr, 7 op/s
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:38:39 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 05:38:39 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:38:39 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
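Note the auth_id in this sequence: "alice bob", with an embedded space. The audit trail shows client.alice bob passing through mon_command intact, which only works when commands are built as argument vectors rather than shell strings. A pure-Python illustration:

    # cephx IDs may contain spaces; argv lists keep them as one argument,
    # shell-style splitting does not.
    import shlex

    argv = ["ceph", "auth", "get", "client.alice bob", "--format", "json"]
    split = shlex.split("ceph auth get client.alice bob --format json")
    print(argv[3])    # client.alice bob
    print(split[3])   # client.alice   (ID broken in two)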
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:39.327+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c79e8b2-8385-4693-a845-3fe4aa3849bb' of type subvolume
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6c79e8b2-8385-4693-a845-3fe4aa3849bb' of type subvolume
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6c79e8b2-8385-4693-a845-3fe4aa3849bb'' moved to trashcan
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6c79e8b2-8385-4693-a845-3fe4aa3849bb, vol_name:cephfs) < ""
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0940c3a8-0a26-4b45-8cd5-2278d86a8159' of type subvolume
Nov 29 05:38:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:39.446+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0940c3a8-0a26-4b45-8cd5-2278d86a8159' of type subvolume
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0940c3a8-0a26-4b45-8cd5-2278d86a8159'' moved to trashcan
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:38:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0940c3a8-0a26-4b45-8cd5-2278d86a8159, vol_name:cephfs) < ""
Nov 29 05:38:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 52 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 136 KiB/s wr, 14 op/s
Nov 29 05:38:40 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:40 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:38:40 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:38:40 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:38:40 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:40 compute-0 sshd-session[265574]: Invalid user frappe from 45.120.216.232 port 34354
Nov 29 05:38:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:40 compute-0 sshd-session[265574]: Received disconnect from 45.120.216.232 port 34354:11: Bye Bye [preauth]
Nov 29 05:38:40 compute-0 sshd-session[265574]: Disconnected from invalid user frappe 45.120.216.232 port 34354 [preauth]
Nov 29 05:38:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "format": "json"}]: dispatch
Nov 29 05:38:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6c79e8b2-8385-4693-a845-3fe4aa3849bb", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "format": "json"}]: dispatch
Nov 29 05:38:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0940c3a8-0a26-4b45-8cd5-2278d86a8159", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:41 compute-0 ceph-mon[75176]: pgmap v1034: 305 pgs: 305 active+clean; 52 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 136 KiB/s wr, 14 op/s
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:38:41
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
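This balancer pass evaluated all 11 pools in upmap mode and prepared 0 of a possible 10 changes, i.e. the PG distribution is already within the 0.05 max-misplaced budget. The same state can be inspected by hand:

    # Quick look at the balancer the log shows running (sketch).
    import subprocess

    print(subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True, check=True).stdout)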
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:38:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
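The rbd_support handlers reload their per-pool schedules here; the doubled load_schedules lines are one per handler (trash purge and mirror snapshot) for each of the pools. The trash-purge schedules they load can be listed directly (sketch; assumes the rbd CLI and an admin keyring are available):

    import subprocess

    print(subprocess.run(["rbd", "trash", "purge", "schedule", "ls"],
                         capture_output=True, text=True, check=True).stdout)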
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 52 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 104 KiB/s wr, 11 op/s
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1460327761
Nov 29 05:38:42 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:38:42.475 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:38:42 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:38:42.477 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:38:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:38:42 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:38:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
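The authorize call maps access_level "r" onto exactly the three cap strings visible in the get-or-create above: MDS caps pinned to the subvolume path, OSD caps pinned to the data pool plus the subvolume's fsvolumens_* namespace, and read-only mon caps. Reconstructed from the log (not from mgr source):

    def subvolume_caps(path: str, pool: str, sub: str, level: str) -> list:
        # Cap layout as it appears in the `auth get-or-create` audit line.
        return ["mds", f"allow {level} path={path}",
                "osd", f"allow {level} pool={pool} namespace=fsvolumens_{sub}",
                "mon", "allow r"]

    print(subvolume_caps(
        "/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/"
        "b9af2a18-fc04-4d96-ba1b-197da2f0632f",
        "cephfs.cephfs.data",
        "848ba3c8-c30f-497b-9372-9c6fce9360b1", "r"))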
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2efbb8e6-d3d3-430b-8165-2af4490ffea0/.meta.tmp'
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2efbb8e6-d3d3-430b-8165-2af4490ffea0/.meta.tmp' to config b'/volumes/_nogroup/2efbb8e6-d3d3-430b-8165-2af4490ffea0/.meta'
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
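The metadata_manager lines show the standard atomic-replace idiom: write the new config to .meta.tmp, then rename it over .meta, so a reader never observes a torn file. Local-filesystem sketch of the same idiom:

    import os

    def atomic_write(path: str, data: bytes) -> None:
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # persist contents before the swap
        os.rename(tmp, path)       # atomic replace on POSIX filesystems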
Nov 29 05:38:42 compute-0 sshd-session[265577]: Received disconnect from 152.32.145.111 port 39050:11: Bye Bye [preauth]
Nov 29 05:38:42 compute-0 sshd-session[265577]: Disconnected from authenticating user root 152.32.145.111 port 39050 [preauth]
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "format": "json"}]: dispatch
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 05:38:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 05:38:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:38:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:43 compute-0 ceph-mon[75176]: pgmap v1035: 305 pgs: 305 active+clean; 52 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 104 KiB/s wr, 11 op/s
Nov 29 05:38:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:38:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:43 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 52 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 103 KiB/s wr, 11 op/s
Nov 29 05:38:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:38:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "format": "json"}]: dispatch
Nov 29 05:38:45 compute-0 ceph-mon[75176]: pgmap v1036: 305 pgs: 305 active+clean; 52 MiB data, 226 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 103 KiB/s wr, 11 op/s
Nov 29 05:38:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 133 KiB/s wr, 15 op/s
Nov 29 05:38:46 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:38:46 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:38:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 05:38:46 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:38:46 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:38:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:46 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:38:46 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:38:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:46 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "format": "json"}]: dispatch
Nov 29 05:38:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:38:47 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:38:47.144+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2efbb8e6-d3d3-430b-8165-2af4490ffea0' of type subvolume
Nov 29 05:38:47 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2efbb8e6-d3d3-430b-8165-2af4490ffea0' of type subvolume
Nov 29 05:38:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:47 compute-0 ceph-mon[75176]: pgmap v1037: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 133 KiB/s wr, 15 op/s
Nov 29 05:38:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:38:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:38:47 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:38:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 05:38:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2efbb8e6-d3d3-430b-8165-2af4490ffea0'' moved to trashcan
Nov 29 05:38:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:38:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2efbb8e6-d3d3-430b-8165-2af4490ffea0, vol_name:cephfs) < ""
Nov 29 05:38:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 96 KiB/s wr, 11 op/s
Nov 29 05:38:48 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:48 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:38:48 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "format": "json"}]: dispatch
Nov 29 05:38:48 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2efbb8e6-d3d3-430b-8165-2af4490ffea0", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:48 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:38:48.480 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
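This DbSetCommand is the write the metadata agent promised at 05:38:42 ("Delaying updating chassis table for 6 seconds"): it stamps neutron:ovn-metadata-sb-cfg with the nb_cfg value 7 carried by the SB_Global update. The delay checks out against the two timestamps:

    # Verifying the announced 6 s delay from the log timestamps.
    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    announced = datetime.strptime("2025-11-29 05:38:42.477", fmt)
    written = datetime.strptime("2025-11-29 05:38:48.480", fmt)
    print((written - announced).total_seconds())   # ~6.0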
Nov 29 05:38:49 compute-0 ceph-mon[75176]: pgmap v1038: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 96 KiB/s wr, 11 op/s
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 116 KiB/s wr, 13 op/s
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp'
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp' to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta'
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "format": "json"}]: dispatch
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 05:38:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:38:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:50 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:38:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:38:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:50 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:38:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:51 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:38:51 compute-0 ceph-mon[75176]: pgmap v1039: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 116 KiB/s wr, 13 op/s
Nov 29 05:38:51 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "format": "json"}]: dispatch
Nov 29 05:38:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0001800252938399427 of space, bias 4.0, pg target 0.21603035260793124 quantized to 16 (current 16)
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:38:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
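The autoscaler arithmetic above is reproducible: each "pg target" is usage_ratio x bias x the cluster PG budget, and the logged values imply a budget of 300 PGs (consistent with mon_target_pg_per_osd = 100 on 3 OSDs; that split is an inference, since neither number is logged here). The tiny targets are then quantized, and pg_num is left alone when the change is below the autoscaler's threshold, hence "quantized to 32 (current 32)" throughout.

    # Reproducing two pg_autoscaler lines from above.
    BUDGET = 300  # inferred: matches every nonzero usage/bias/target triple logged

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * BUDGET

    print(pg_target(0.000665858301588852, 1.0))    # ~0.1998 -> pool 'images'
    print(pg_target(0.0001800252938399427, 4.0))   # ~0.2160 -> 'cephfs.cephfs.meta'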
Nov 29 05:38:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 50 KiB/s wr, 7 op/s
Nov 29 05:38:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:38:53 compute-0 ceph-mon[75176]: pgmap v1040: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 50 KiB/s wr, 7 op/s
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "snap_name": "3393ee82-df40-40bd-8c8e-22fcc53b34d2", "format": "json"}]: dispatch
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
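Snapshot creation is dispatched the same way as the other subvolume operations; the CLI equivalent, with the names from the audit line above:

    import subprocess

    subprocess.run(["ceph", "fs", "subvolume", "snapshot", "create",
                    "cephfs", "46ac263c-91aa-4770-862a-dd35f490382b",
                    "3393ee82-df40-40bd-8c8e-22fcc53b34d2"], check=True)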
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:38:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 05:38:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:38:53 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:38:53 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:38:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:38:54 compute-0 podman[265581]: 2025-11-29 05:38:54.019915459 +0000 UTC m=+0.077428643 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
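The podman line above is a periodic health probe of the multipathd container: the configured test, /openstack/healthcheck, ran inside the container and reported healthy with a failing streak of 0. The same probe can be triggered on demand (sketch):

    import subprocess

    # Runs the container's configured healthcheck once; exit code 0 = healthy.
    subprocess.run(["podman", "healthcheck", "run", "multipathd"], check=True)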
Nov 29 05:38:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 50 KiB/s wr, 6 op/s
Nov 29 05:38:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:38:54 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:38:55 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "snap_name": "3393ee82-df40-40bd-8c8e-22fcc53b34d2", "format": "json"}]: dispatch
Nov 29 05:38:55 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:55 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:38:55 compute-0 ceph-mon[75176]: pgmap v1041: 305 pgs: 305 active+clean; 52 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 50 KiB/s wr, 6 op/s
Nov 29 05:38:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:38:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 53 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 89 KiB/s wr, 11 op/s
Nov 29 05:38:56 compute-0 ceph-mon[75176]: pgmap v1042: 305 pgs: 305 active+clean; 53 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 89 KiB/s wr, 11 op/s
Nov 29 05:38:57 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:38:57 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:38:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:38:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:57 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:38:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:38:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:57 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:38:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:38:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:38:57 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
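[annotation] The authorize round trip above mints read-only credentials scoped to one subvolume: the mgr records tenant metadata ("Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2"), then has the mon run "auth get-or-create" on client.alice with three caps: mds read restricted to the subvolume path, osd read restricted to pool cephfs.cephfs.data and the per-subvolume namespace fsvolumens_848ba3c8-..., and mon allow r. A sketch of issuing the same mon command via librados, with the caps strings copied verbatim from the audit lines (cluster access details are assumptions, as before):

    # Illustrative sketch: the "auth get-or-create" the mgr dispatched above,
    # issued directly as a mon command. Caps copied from the log.
    import json
    import rados

    cmd = json.dumps({
        "prefix": "auth get-or-create",
        "entity": "client.alice",
        "caps": [
            "mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f",
            "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1",
            "mon", "allow r",
        ],
        "format": "json",
    })

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, outbuf, outs = cluster.mon_command(cmd, b"")  # auth state lives on the mons
        print(json.loads(outbuf) if ret == 0 else outs)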
Nov 29 05:38:58 compute-0 podman[265601]: 2025-11-29 05:38:58.04180267 +0000 UTC m=+0.093405720 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 53 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 6 op/s
Nov 29 05:38:58 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:38:58 compute-0 ceph-mon[75176]: pgmap v1043: 305 pgs: 305 active+clean; 53 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 60 KiB/s wr, 6 op/s
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "snap_name": "3393ee82-df40-40bd-8c8e-22fcc53b34d2_ef38678e-a0e9-4751-9f3b-809b04461abf", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2_ef38678e-a0e9-4751-9f3b-809b04461abf, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp'
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp' to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta'
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2_ef38678e-a0e9-4751-9f3b-809b04461abf, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "snap_name": "3393ee82-df40-40bd-8c8e-22fcc53b34d2", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp'
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta.tmp' to config b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b/.meta'
Nov 29 05:38:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3393ee82-df40-40bd-8c8e-22fcc53b34d2, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 05:38:59 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "snap_name": "3393ee82-df40-40bd-8c8e-22fcc53b34d2_ef38678e-a0e9-4751-9f3b-809b04461abf", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:59 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "snap_name": "3393ee82-df40-40bd-8c8e-22fcc53b34d2", "force": true, "format": "json"}]: dispatch
Nov 29 05:38:59 compute-0 sudo[265628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:38:59 compute-0 sudo[265628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:38:59 compute-0 sudo[265628]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:59 compute-0 sudo[265653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:38:59 compute-0 sudo[265653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:38:59 compute-0 sudo[265653]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:59 compute-0 sudo[265678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:38:59 compute-0 sudo[265678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:38:59 compute-0 sudo[265678]: pam_unix(sudo:session): session closed for user root
Nov 29 05:38:59 compute-0 sudo[265703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:38:59 compute-0 sudo[265703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 53 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 84 KiB/s wr, 9 op/s
Nov 29 05:39:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:00 compute-0 sudo[265703]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:00 compute-0 ceph-mon[75176]: pgmap v1044: 305 pgs: 305 active+clean; 53 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 84 KiB/s wr, 9 op/s
Nov 29 05:39:00 compute-0 sudo[265759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:39:00 compute-0 sudo[265759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:00 compute-0 sudo[265759]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:00 compute-0 sudo[265784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:39:00 compute-0 sudo[265784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:00 compute-0 sudo[265784]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:00 compute-0 sudo[265809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:39:00 compute-0 sudo[265809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:00 compute-0 sudo[265809]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:00 compute-0 sudo[265834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 29 05:39:00 compute-0 sudo[265834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:00 compute-0 sudo[265834]: pam_unix(sudo:session): session closed for user root
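[annotation] The sudo bursts above are cephadm's periodic host refresh: the mgr's orchestrator logs in to the host as ceph-admin, probes the session with /bin/true, locates python3, then runs the staged cephadm binary with gather-facts and list-networks. A rough, purely illustrative reconstruction of that probe sequence using plain ssh plus sudo (cephadm's real transport is its own ssh machinery inside the mgr; the host name and binary path below are taken from the log):

    # Illustrative reconstruction only; not how cephadm is actually invoked.
    import subprocess

    HOST = "compute-0"
    CEPHADM = ("/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    def run(remote_cmd):
        # ssh in as ceph-admin and escalate with sudo, as the PAM lines show.
        return subprocess.run(
            ["ssh", "ceph-admin@" + HOST, "sudo", *remote_cmd],
            check=True, capture_output=True, text=True,
        ).stdout

    run(["/bin/true"])                               # connection probe
    python3 = run(["/bin/which", "python3"]).strip() # interpreter discovery
    facts = run([python3, CEPHADM, "--timeout", "895", "gather-facts"])
    print(facts[:200])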
Nov 29 05:39:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:39:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:39:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:39:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:39:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:39:00 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:39:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:39:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:39:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:39:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:39:00 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ad665ca2-66c7-4b31-b888-ec01e05fe420 does not exist
Nov 29 05:39:00 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 5bd038e2-18e9-406f-acf3-8d32515c3b49 does not exist
Nov 29 05:39:00 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 27b6421f-db6b-4424-afb1-a3a7fd36c04b does not exist
Nov 29 05:39:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:39:00 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:39:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:39:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:39:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:39:00 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:39:01 compute-0 sudo[265879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:39:01 compute-0 sudo[265879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:01 compute-0 sudo[265879]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:01 compute-0 sudo[265904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:39:01 compute-0 sudo[265904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:01 compute-0 sudo[265904]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:01 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:39:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 05:39:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:39:01 compute-0 sudo[265929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:39:01 compute-0 sudo[265929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:01 compute-0 sudo[265929]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:01 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:01 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:01 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:39:01 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:39:01 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:01 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:01 compute-0 sudo[265954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:39:01 compute-0 sudo[265954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:01 compute-0 podman[266021]: 2025-11-29 05:39:01.636861978 +0000 UTC m=+0.033090782 container create b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:39:01 compute-0 systemd[1]: Started libpod-conmon-b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99.scope.
Nov 29 05:39:01 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:39:01 compute-0 podman[266021]: 2025-11-29 05:39:01.622604753 +0000 UTC m=+0.018833587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:39:01 compute-0 podman[266021]: 2025-11-29 05:39:01.720908111 +0000 UTC m=+0.117136995 container init b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:39:01 compute-0 podman[266021]: 2025-11-29 05:39:01.726974667 +0000 UTC m=+0.123203491 container start b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 05:39:01 compute-0 podman[266021]: 2025-11-29 05:39:01.730432461 +0000 UTC m=+0.126661365 container attach b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:39:01 compute-0 peaceful_driscoll[266037]: 167 167
Nov 29 05:39:01 compute-0 systemd[1]: libpod-b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99.scope: Deactivated successfully.
Nov 29 05:39:01 compute-0 podman[266021]: 2025-11-29 05:39:01.735891354 +0000 UTC m=+0.132120188 container died b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d24688ffd8105db34b41dcc7b0c55af7b2833a7d46a74f835ba431b21bb55ecc-merged.mount: Deactivated successfully.
Nov 29 05:39:01 compute-0 podman[266021]: 2025-11-29 05:39:01.780673447 +0000 UTC m=+0.176902261 container remove b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_driscoll, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:39:01 compute-0 systemd[1]: libpod-conmon-b950fda0d0cd457e349fd24e78985f48c49d86382eacf5ae49ccf520b9f73b99.scope: Deactivated successfully.
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:39:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:01 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 05:39:01 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 05:39:01 compute-0 podman[266061]: 2025-11-29 05:39:01.977523408 +0000 UTC m=+0.044328183 container create 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/651b4bb8-257f-4b27-8e91-4460977c10fa/.meta.tmp'
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/651b4bb8-257f-4b27-8e91-4460977c10fa/.meta.tmp' to config b'/volumes/_nogroup/651b4bb8-257f-4b27-8e91-4460977c10fa/.meta'
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "format": "json"}]: dispatch
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 05:39:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:02 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:02 compute-0 systemd[1]: Started libpod-conmon-0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74.scope.
Nov 29 05:39:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:02 compute-0 podman[266061]: 2025-11-29 05:39:01.956601912 +0000 UTC m=+0.023406737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 53 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 7 op/s
Nov 29 05:39:02 compute-0 podman[266061]: 2025-11-29 05:39:02.060621368 +0000 UTC m=+0.127426173 container init 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:39:02 compute-0 podman[266061]: 2025-11-29 05:39:02.066135652 +0000 UTC m=+0.132940437 container start 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:39:02 compute-0 podman[266061]: 2025-11-29 05:39:02.069232867 +0000 UTC m=+0.136037672 container attach 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "46ac263c-91aa-4770-862a-dd35f490382b", "format": "json"}]: dispatch
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:46ac263c-91aa-4770-862a-dd35f490382b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:46ac263c-91aa-4770-862a-dd35f490382b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:02 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:02.147+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '46ac263c-91aa-4770-862a-dd35f490382b' of type subvolume
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '46ac263c-91aa-4770-862a-dd35f490382b' of type subvolume
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/46ac263c-91aa-4770-862a-dd35f490382b'' moved to trashcan
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:46ac263c-91aa-4770-862a-dd35f490382b, vol_name:cephfs) < ""
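[annotation] Two steps complete above: the "fs clone status" poll is answered with (95) Operation not supported because 46ac263c-... is a plain subvolume rather than a clone, and the subsequent "fs subvolume rm" with force:true moves the subvolume directory into the trashcan and queues an async purge job rather than deleting it inline. A sketch of that status poll, handling the error the way a caller would treat it before issuing the rm (same cluster-access assumptions as the earlier sketches):

    # Illustrative sketch: the "fs clone status" call that drew the (95) reply.
    import errno
    import json
    import rados

    cmd = json.dumps({
        "prefix": "fs clone status",
        "vol_name": "cephfs",
        "clone_name": "46ac263c-91aa-4770-862a-dd35f490382b",
        "format": "json",
    })

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, outbuf, outs = cluster.mgr_command(cmd, b"")
        if ret == -errno.EOPNOTSUPP:
            print("not a clone, nothing to wait for:", outs)
        elif ret == 0:
            print(json.loads(outbuf))  # clone progress when the target is a clone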
Nov 29 05:39:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 29 05:39:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 29 05:39:02 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:02 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "format": "json"}]: dispatch
Nov 29 05:39:02 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:02 compute-0 ceph-mon[75176]: pgmap v1045: 305 pgs: 305 active+clean; 53 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 64 KiB/s wr, 7 op/s
Nov 29 05:39:02 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "46ac263c-91aa-4770-862a-dd35f490382b", "format": "json"}]: dispatch
Nov 29 05:39:02 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "46ac263c-91aa-4770-862a-dd35f490382b", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:02 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 29 05:39:03 compute-0 objective_jang[266077]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:39:03 compute-0 objective_jang[266077]: --> relative data size: 1.0
Nov 29 05:39:03 compute-0 objective_jang[266077]: --> All data devices are unavailable
Nov 29 05:39:03 compute-0 systemd[1]: libpod-0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74.scope: Deactivated successfully.
Nov 29 05:39:03 compute-0 podman[266061]: 2025-11-29 05:39:03.144195488 +0000 UTC m=+1.211000313 container died 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:39:03 compute-0 systemd[1]: libpod-0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74.scope: Consumed 1.002s CPU time.
Nov 29 05:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d1bd1b047cb2b901a602d65702222141784bad63d5c6839396d23ecb8b67a9f-merged.mount: Deactivated successfully.
Nov 29 05:39:03 compute-0 podman[266061]: 2025-11-29 05:39:03.206840383 +0000 UTC m=+1.273645168 container remove 0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:39:03 compute-0 systemd[1]: libpod-conmon-0e3c9faf851ee63f57918bdd93a06ca91f72a45b1297a87e2bf665f5790a3b74.scope: Deactivated successfully.
Nov 29 05:39:03 compute-0 sudo[265954]: pam_unix(sudo:session): session closed for user root
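[annotation] The ceph-volume run that just closed is the idempotent path of cephadm's OSD spec: "lvm batch --no-auto" inspects /dev/ceph_vg{0,1,2}/ceph_lv{0,1,2}, finds every LV already consumed by one of the three running OSDs (the osdmap stays at "3 total, 3 up, 3 in"), reports "All data devices are unavailable", and creates nothing; cephadm follows up with "lvm list" to refresh its inventory, as the next sudo round shows. A sketch that reads that same inventory and prints which OSD owns each LV; it assumes the cephadm CLI is installed, runs as root, and that the JSON keys match your ceph-volume release:

    # Illustrative sketch: parse "ceph-volume lvm list --format json" (the same
    # call cephadm launches just below) to see why batch had nothing to do.
    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "93f82912-647c-5e78-b081-707d0a2966d8",
         "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            # "lv_path" / "type" are the usual keys; adjust if your release differs.
            print("osd." + osd_id + ":", dev.get("lv_path"), "(" + dev.get("type", "?") + ")")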
Nov 29 05:39:03 compute-0 sudo[266118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:39:03 compute-0 sudo[266118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:03 compute-0 sudo[266118]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:03 compute-0 sudo[266143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:39:03 compute-0 sudo[266143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:03 compute-0 sudo[266143]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:03 compute-0 sudo[266168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:39:03 compute-0 sudo[266168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:03 compute-0 sudo[266168]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:03 compute-0 sudo[266193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:39:03 compute-0 sudo[266193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:03 compute-0 podman[266260]: 2025-11-29 05:39:03.835739754 +0000 UTC m=+0.046103775 container create 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 05:39:03 compute-0 systemd[1]: Started libpod-conmon-9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199.scope.
Nov 29 05:39:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:39:03 compute-0 podman[266260]: 2025-11-29 05:39:03.904234831 +0000 UTC m=+0.114598842 container init 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:39:03 compute-0 podman[266260]: 2025-11-29 05:39:03.810176746 +0000 UTC m=+0.020540817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:39:03 compute-0 podman[266260]: 2025-11-29 05:39:03.911927118 +0000 UTC m=+0.122291109 container start 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:39:03 compute-0 podman[266260]: 2025-11-29 05:39:03.914962561 +0000 UTC m=+0.125326572 container attach 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:39:03 compute-0 nifty_mestorf[266276]: 167 167
Nov 29 05:39:03 compute-0 systemd[1]: libpod-9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199.scope: Deactivated successfully.
Nov 29 05:39:03 compute-0 podman[266260]: 2025-11-29 05:39:03.916508828 +0000 UTC m=+0.126872819 container died 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-80d838b1d52f2bda0ef28a1e27263cd5d5d4e076ebe36bf3d998a982377eb150-merged.mount: Deactivated successfully.
Nov 29 05:39:03 compute-0 ceph-mon[75176]: osdmap e145: 3 total, 3 up, 3 in
Nov 29 05:39:03 compute-0 podman[266260]: 2025-11-29 05:39:03.957565921 +0000 UTC m=+0.167929952 container remove 9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 05:39:03 compute-0 systemd[1]: libpod-conmon-9d5c388b78415596a436c4d43f10a319bc169375944db370cf8edab765003199.scope: Deactivated successfully.
Nov 29 05:39:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 53 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 76 KiB/s wr, 9 op/s
Nov 29 05:39:04 compute-0 podman[266300]: 2025-11-29 05:39:04.098777107 +0000 UTC m=+0.036999686 container create 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 05:39:04 compute-0 systemd[1]: Started libpod-conmon-6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f.scope.
Nov 29 05:39:04 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b0482aceed5226ec62de5591edc4768f6dd0e17ae2203bfaaf50b48b907637/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b0482aceed5226ec62de5591edc4768f6dd0e17ae2203bfaaf50b48b907637/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b0482aceed5226ec62de5591edc4768f6dd0e17ae2203bfaaf50b48b907637/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b0482aceed5226ec62de5591edc4768f6dd0e17ae2203bfaaf50b48b907637/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:04 compute-0 podman[266300]: 2025-11-29 05:39:04.082734509 +0000 UTC m=+0.020957108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:39:04 compute-0 podman[266300]: 2025-11-29 05:39:04.184724296 +0000 UTC m=+0.122946935 container init 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 05:39:04 compute-0 podman[266300]: 2025-11-29 05:39:04.195709822 +0000 UTC m=+0.133932401 container start 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:39:04 compute-0 podman[266300]: 2025-11-29 05:39:04.19849433 +0000 UTC m=+0.136716979 container attach 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 05:39:04 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:04 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:39:04 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:04 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:39:04 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:04 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:04 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:04 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
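
The lines above trace a single "fs subvolume authorize" call end to end: the mgr volumes module dispatches it, then issues two mon commands on the caller's behalf, "auth get" to check for an existing client.alice_bob entity and "auth get-or-create" to mint one whose mds/osd caps are pinned to the subvolume path and its fsvolumens_* RADOS namespace. A minimal sketch of the same call from the host, assuming the ceph CLI and an admin keyring are available (the helper name is illustrative; the tenant_id seen in the log is supplied by the OpenStack caller, not by this sketch):

    import subprocess

    def subvolume_authorize(vol: str, sub: str, auth_id: str,
                            access_level: str = "rw") -> str:
        # Mirrors the mgr dispatch logged above; returns the cephx key
        # for client.<auth_id> on success.
        out = subprocess.run(
            ["ceph", "fs", "subvolume", "authorize", vol, sub, auth_id,
             f"--access_level={access_level}"],
            check=True, capture_output=True, text=True)
        return out.stdout.strip()

    # Values taken from the log lines above:
    # subvolume_authorize("cephfs",
    #                     "848ba3c8-c30f-497b-9372-9c6fce9360b1",
    #                     "alice_bob")
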
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]: {
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:     "0": [
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:         {
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "devices": [
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "/dev/loop3"
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             ],
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_name": "ceph_lv0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_size": "21470642176",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "name": "ceph_lv0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "tags": {
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.cluster_name": "ceph",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.crush_device_class": "",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.encrypted": "0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.osd_id": "0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.type": "block",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.vdo": "0"
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             },
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "type": "block",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "vg_name": "ceph_vg0"
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:         }
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:     ],
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:     "1": [
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:         {
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "devices": [
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "/dev/loop4"
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             ],
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_name": "ceph_lv1",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_size": "21470642176",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "name": "ceph_lv1",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "tags": {
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.cluster_name": "ceph",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.crush_device_class": "",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.encrypted": "0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.osd_id": "1",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.type": "block",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.vdo": "0"
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             },
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "type": "block",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "vg_name": "ceph_vg1"
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:         }
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:     ],
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:     "2": [
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:         {
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "devices": [
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "/dev/loop5"
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             ],
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_name": "ceph_lv2",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_size": "21470642176",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "name": "ceph_lv2",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "tags": {
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.cluster_name": "ceph",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.crush_device_class": "",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.encrypted": "0",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.osd_id": "2",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.type": "block",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:                 "ceph.vdo": "0"
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             },
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "type": "block",
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:             "vg_name": "ceph_vg2"
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:         }
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]:     ]
Nov 29 05:39:04 compute-0 dreamy_shtern[266317]: }
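
The JSON block printed by the dreamy_shtern container is a ceph-volume lvm-list-style inventory: a map from OSD id to the logical volumes backing it, with the authoritative metadata carried in lv_tags and broken out under "tags". A small parsing sketch for a report of exactly this shape (function and variable names are illustrative):

    import json

    def osd_table(report_json: str) -> dict:
        # Flatten the lvm-list-style report printed above into
        # {osd_id: (lv_path, osd_fsid)}.
        table = {}
        for osd_id, lvs in json.loads(report_json).items():
            for lv in lvs:
                table[int(osd_id)] = (lv["lv_path"],
                                      lv["tags"].get("ceph.osd_fsid"))
        return table

    # With the report above this yields:
    # {0: ('/dev/ceph_vg0/ceph_lv0', '3cc3f442-c807-4e2a-868e-a4aae87af231'),
    #  1: ('/dev/ceph_vg1/ceph_lv1', 'b9801566-0c31-4202-a669-811037218c27'),
    #  2: ('/dev/ceph_vg2/ceph_lv2', 'eec69945-b157-41e1-8fba-3992c2dca958')}
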
Nov 29 05:39:04 compute-0 systemd[1]: libpod-6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f.scope: Deactivated successfully.
Nov 29 05:39:04 compute-0 podman[266300]: 2025-11-29 05:39:04.957133269 +0000 UTC m=+0.895355858 container died 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:39:04 compute-0 ceph-mon[75176]: pgmap v1047: 305 pgs: 305 active+clean; 53 MiB data, 245 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 76 KiB/s wr, 9 op/s
Nov 29 05:39:04 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:04 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:04 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0b0482aceed5226ec62de5591edc4768f6dd0e17ae2203bfaaf50b48b907637-merged.mount: Deactivated successfully.
Nov 29 05:39:05 compute-0 podman[266300]: 2025-11-29 05:39:05.001342669 +0000 UTC m=+0.939565248 container remove 6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:39:05 compute-0 systemd[1]: libpod-conmon-6b5e9f3cb34406ad0a6dd41608d4f70ceab83430b6ec625ad576db4f1489119f.scope: Deactivated successfully.
Nov 29 05:39:05 compute-0 sudo[266193]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:05 compute-0 sudo[266340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:39:05 compute-0 sudo[266340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:05 compute-0 sudo[266340]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:05 compute-0 sudo[266365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:39:05 compute-0 sudo[266365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:05 compute-0 sudo[266365]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:05 compute-0 sudo[266390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:39:05 compute-0 sudo[266390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:05 compute-0 sudo[266390]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:05 compute-0 sudo[266415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:39:05 compute-0 sudo[266415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:05 compute-0 podman[266482]: 2025-11-29 05:39:05.539540956 +0000 UTC m=+0.041435302 container create 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:39:05 compute-0 systemd[1]: Started libpod-conmon-751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3.scope.
Nov 29 05:39:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:39:05 compute-0 podman[266482]: 2025-11-29 05:39:05.589217878 +0000 UTC m=+0.091112224 container init 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:39:05 compute-0 podman[266482]: 2025-11-29 05:39:05.59509737 +0000 UTC m=+0.096991696 container start 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:39:05 compute-0 podman[266482]: 2025-11-29 05:39:05.598644647 +0000 UTC m=+0.100539003 container attach 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 05:39:05 compute-0 funny_kilby[266498]: 167 167
Nov 29 05:39:05 compute-0 systemd[1]: libpod-751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3.scope: Deactivated successfully.
Nov 29 05:39:05 compute-0 podman[266482]: 2025-11-29 05:39:05.599666371 +0000 UTC m=+0.101560707 container died 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 05:39:05 compute-0 podman[266482]: 2025-11-29 05:39:05.519280927 +0000 UTC m=+0.021175313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:39:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdd9c4fea18c4173057f9700c93e33590b2b59a8c8d9918b1dead178755b9fc9-merged.mount: Deactivated successfully.
Nov 29 05:39:05 compute-0 podman[266482]: 2025-11-29 05:39:05.637185558 +0000 UTC m=+0.139079894 container remove 751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kilby, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 05:39:05 compute-0 systemd[1]: libpod-conmon-751ed3fd43065d2a6419184701aa260af6296625d69b85c42e13eaf12c6a93b3.scope: Deactivated successfully.
Nov 29 05:39:05 compute-0 podman[266522]: 2025-11-29 05:39:05.81293116 +0000 UTC m=+0.055404702 container create a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 29 05:39:05 compute-0 systemd[1]: Started libpod-conmon-a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a.scope.
Nov 29 05:39:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b63aa0fc8e94951f876c60a57a99e7d7a3e3c02cd91d8bf80626207f8a8b40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b63aa0fc8e94951f876c60a57a99e7d7a3e3c02cd91d8bf80626207f8a8b40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b63aa0fc8e94951f876c60a57a99e7d7a3e3c02cd91d8bf80626207f8a8b40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b63aa0fc8e94951f876c60a57a99e7d7a3e3c02cd91d8bf80626207f8a8b40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:39:05 compute-0 podman[266522]: 2025-11-29 05:39:05.797311721 +0000 UTC m=+0.039785273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:39:05 compute-0 podman[266522]: 2025-11-29 05:39:05.901885961 +0000 UTC m=+0.144359493 container init a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:39:05 compute-0 podman[266522]: 2025-11-29 05:39:05.912866537 +0000 UTC m=+0.155340079 container start a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:39:05 compute-0 podman[266522]: 2025-11-29 05:39:05.916152636 +0000 UTC m=+0.158626198 container attach a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 05:39:05 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 69 KiB/s wr, 7 op/s
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]: {
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "osd_id": 0,
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "type": "bluestore"
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:     },
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "osd_id": 1,
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "type": "bluestore"
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:     },
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "osd_id": 2,
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:         "type": "bluestore"
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]:     }
Nov 29 05:39:06 compute-0 gallant_dubinsky[266538]: }
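
This second report comes from the "raw list" pass that cephadm launched at 05:39:05 (the sudo line shows ceph-volume --fsid ... -- raw list --format json). Unlike the LVM report it is keyed by OSD fsid, so the two views can be cross-checked: each entry's osd_uuid should match a ceph.osd_fsid tag above and agree on the OSD id. A hedged consistency sketch, assuming both reports were captured as strings:

    import json

    def check_reports(lvm_report: str, raw_report: str) -> None:
        # Every OSD in the lvm-style report must appear in the
        # raw-list-style report under its osd_fsid with the same id.
        raw = json.loads(raw_report)
        for osd_id, lvs in json.loads(lvm_report).items():
            fsid = lvs[0]["tags"]["ceph.osd_fsid"]
            assert raw[fsid]["osd_id"] == int(osd_id), (osd_id, fsid)
            assert raw[fsid]["type"] == "bluestore"

All three OSDs here pass: ids 0-2 map to the same fsids in both reports.
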
Nov 29 05:39:06 compute-0 systemd[1]: libpod-a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a.scope: Deactivated successfully.
Nov 29 05:39:06 compute-0 conmon[266538]: conmon a8c95b8211b76115d6e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a.scope/container/memory.events
Nov 29 05:39:06 compute-0 podman[266522]: 2025-11-29 05:39:06.808979612 +0000 UTC m=+1.051453144 container died a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:39:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 69 KiB/s wr, 7 op/s
Nov 29 05:39:09 compute-0 ceph-mon[75176]: pgmap v1048: 305 pgs: 305 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 69 KiB/s wr, 7 op/s
Nov 29 05:39:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7b63aa0fc8e94951f876c60a57a99e7d7a3e3c02cd91d8bf80626207f8a8b40-merged.mount: Deactivated successfully.
Nov 29 05:39:09 compute-0 podman[266522]: 2025-11-29 05:39:09.629431343 +0000 UTC m=+3.871904875 container remove a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:39:09 compute-0 systemd[1]: libpod-conmon-a8c95b8211b76115d6e9349be2845040a4ce0c8355ed0f57d524b023f0b92e1a.scope: Deactivated successfully.
Nov 29 05:39:09 compute-0 sudo[266415]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:09 compute-0 podman[266582]: 2025-11-29 05:39:09.664039161 +0000 UTC m=+1.705171606 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 05:39:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:39:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:39:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:39:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:39:09 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev c6022f75-d48d-43cc-9264-a6616bc9d011 does not exist
Nov 29 05:39:09 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 284a4067-78d0-496c-a010-29f6db773d94 does not exist
Nov 29 05:39:09 compute-0 sudo[266603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:39:09 compute-0 sudo[266603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:09 compute-0 sudo[266603]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:09 compute-0 sudo[266628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:39:09 compute-0 sudo[266628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:39:09 compute-0 sudo[266628]: pam_unix(sudo:session): session closed for user root
Nov 29 05:39:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:39:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 05:39:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:39:09 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:39:09 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 05:39:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:39:09 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:39:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
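
Revocation mirrors the authorize flow: "fs subvolume deauthorize" drops the client.alice_bob entity via "auth rm", and "fs subvolume evict" then asks the MDS (the session evict with auth_name and client_metadata.root filters logged by ceph-mds above) to terminate any live sessions that were using it. The same two-step teardown from the host, again assuming the ceph CLI and admin credentials (helper name illustrative):

    import subprocess

    def subvolume_revoke(vol: str, sub: str, auth_id: str) -> None:
        # Deauthorize first so no new mounts succeed, then evict any
        # sessions still holding the old caps -- the order the mgr
        # follows in the log above.
        for verb in ("deauthorize", "evict"):
            subprocess.run(["ceph", "fs", "subvolume", verb,
                            vol, sub, auth_id], check=True)
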
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 7 op/s
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp'
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp' to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta'
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "format": "json"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 05:39:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:10 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "format": "json"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:651b4bb8-257f-4b27-8e91-4460977c10fa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:651b4bb8-257f-4b27-8e91-4460977c10fa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:10 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:10.216+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '651b4bb8-257f-4b27-8e91-4460977c10fa' of type subvolume
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '651b4bb8-257f-4b27-8e91-4460977c10fa' of type subvolume
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/651b4bb8-257f-4b27-8e91-4460977c10fa'' moved to trashcan
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:651b4bb8-257f-4b27-8e91-4460977c10fa, vol_name:cephfs) < ""
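
Two details of the deletion flow above are worth noting: "fs clone status" on a plain (non-clone) subvolume fails with EOPNOTSUPP (the errno 95 "not allowed on subvolume ... of type subvolume" reply), and "fs subvolume rm --force" is asynchronous, renaming the path into a trashcan and queuing a purge job rather than deleting data inline. A caller-side sketch that treats errno 95 as "not a clone" before falling back to removal (assumes the ceph CLI; function names illustrative):

    import subprocess

    def clone_status(vol: str, clone: str):
        # Returns the clone-status JSON, or None when the target is a
        # plain subvolume (the EOPNOTSUPP case logged above).
        p = subprocess.run(["ceph", "fs", "clone", "status", vol, clone,
                            "--format", "json"],
                           capture_output=True, text=True)
        return p.stdout if p.returncode == 0 else None

    def subvolume_rm(vol: str, sub: str) -> None:
        # Forced removal; completion is deferred to the async purge
        # job queued by the mgr ("queuing job for volume 'cephfs'").
        subprocess.run(["ceph", "fs", "subvolume", "rm", vol, sub,
                        "--force"], check=True)
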
Nov 29 05:39:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:10 compute-0 ceph-mon[75176]: pgmap v1049: 305 pgs: 305 active+clean; 53 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 69 KiB/s wr, 7 op/s
Nov 29 05:39:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:39:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:39:10 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:39:10 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mon[75176]: pgmap v1050: 305 pgs: 305 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 7 op/s
Nov 29 05:39:10 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "format": "json"}]: dispatch
Nov 29 05:39:10 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:39:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:39:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:39:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:39:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:39:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:39:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "format": "json"}]: dispatch
Nov 29 05:39:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "651b4bb8-257f-4b27-8e91-4460977c10fa", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 7 op/s
Nov 29 05:39:12 compute-0 ceph-mon[75176]: pgmap v1051: 305 pgs: 305 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 66 KiB/s wr, 7 op/s
Nov 29 05:39:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:39:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:39:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:13 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:39:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
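[Editor's note] The authorize round-trip above shows the mgr volumes module expanding a single "fs subvolume authorize" call into mon "auth get" / "auth get-or-create" commands whose caps embed the subvolume path and its RADOS namespace. A minimal sketch of that caps payload, reconstructed from the JSON in the log lines above; the helper name and structure are illustrative, not the volumes module's own code.

    # Hypothetical helper mirroring the caps list seen in the
    # "auth get-or-create" payload above.
    def build_subvol_caps(sub_name, sub_path, data_pool, access_level):
        # access_level is "r" or "rw", exactly as passed to
        # "fs subvolume authorize"; the osd cap is scoped to a per-subvolume
        # RADOS namespace so isolated subvolumes cannot read each other's data.
        return [
            "mds", f"allow {access_level} path={sub_path}",
            "osd", f"allow {access_level} pool={data_pool} "
                   f"namespace=fsvolumens_{sub_name}",
            "mon", "allow r",
        ]

    caps = build_subvol_caps(
        "848ba3c8-c30f-497b-9372-9c6fce9360b1",
        "/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/"
        "b9af2a18-fc04-4d96-ba1b-197da2f0632f",
        "cephfs.cephfs.data",
        "r",
    )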
Nov 29 05:39:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "snap_name": "eb1bc411-f674-4652-a897-abf914faeef2", "format": "json"}]: dispatch
Nov 29 05:39:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb1bc411-f674-4652-a897-abf914faeef2, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 05:39:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:eb1bc411-f674-4652-a897-abf914faeef2, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 05:39:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:39:13.754 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:39:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:39:13.754 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:39:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:39:13.755 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:39:13 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:13 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:13 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 461 B/s rd, 60 KiB/s wr, 6 op/s
Nov 29 05:39:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:39:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297539908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:39:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:39:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1297539908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:39:14 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:39:14 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "snap_name": "eb1bc411-f674-4652-a897-abf914faeef2", "format": "json"}]: dispatch
Nov 29 05:39:14 compute-0 ceph-mon[75176]: pgmap v1052: 305 pgs: 305 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 461 B/s rd, 60 KiB/s wr, 6 op/s
Nov 29 05:39:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1297539908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:39:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1297539908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:39:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 29 05:39:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 29 05:39:15 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 29 05:39:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 72 KiB/s wr, 8 op/s
Nov 29 05:39:16 compute-0 ceph-mon[75176]: osdmap e146: 3 total, 3 up, 3 in
Nov 29 05:39:16 compute-0 ceph-mon[75176]: pgmap v1054: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 72 KiB/s wr, 8 op/s
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:39:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 05:39:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:39:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:39:17 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
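[Editor's note] Deauthorize alone only removes the key ("auth rm"); the follow-up evict asks the MDS, via its admin socket, to drop any live sessions matching both the auth name and the client root, so a revoked client cannot keep working from cached capabilities. A sketch of an equivalent call through "ceph tell", assuming "session evict" accepts the same key=value filters shown in the asok_command line above (mds name and filter strings copied from that line).

    import subprocess

    # Assumption: filters may be passed as trailing key=value arguments,
    # as the asok_command log line suggests.
    filters = [
        "auth_name=alice_bob",
        "client_metadata.root=/volumes/_nogroup/"
        "848ba3c8-c30f-497b-9372-9c6fce9360b1/"
        "b9af2a18-fc04-4d96-ba1b-197da2f0632f",
    ]
    subprocess.run(
        ["ceph", "tell", "mds.cephfs.compute-0.mjtuko",
         "session", "evict", *filters],
        check=True,
    )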
Nov 29 05:39:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:39:17 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "snap_name": "eb1bc411-f674-4652-a897-abf914faeef2_bdaf9ec6-4ab8-492f-a30a-f313b38f5d36", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb1bc411-f674-4652-a897-abf914faeef2_bdaf9ec6-4ab8-492f-a30a-f313b38f5d36, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp'
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp' to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta'
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb1bc411-f674-4652-a897-abf914faeef2_bdaf9ec6-4ab8-492f-a30a-f313b38f5d36, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "snap_name": "eb1bc411-f674-4652-a897-abf914faeef2", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb1bc411-f674-4652-a897-abf914faeef2, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp'
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta.tmp' to config b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e/.meta'
Nov 29 05:39:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:eb1bc411-f674-4652-a897-abf914faeef2, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
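[Editor's note] Each snapshot rm above rewrites the subvolume's .meta by writing the full config to .meta.tmp and then renaming it over .meta: the classic atomic-replace pattern, so a crash mid-write can never leave a truncated config behind. The volumes module does this through libcephfs; the sketch below shows the same pattern with plain local-filesystem calls for illustration only.

    import os

    def atomic_write(path, data: bytes):
        # Write the whole payload to <path>.tmp, fsync it, then rename over
        # the original. rename()/os.replace() is atomic within a filesystem,
        # so readers see either the old .meta or the new one, never a partial
        # file.
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)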
Nov 29 05:39:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 72 KiB/s wr, 8 op/s
Nov 29 05:39:18 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:18 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:18 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "snap_name": "eb1bc411-f674-4652-a897-abf914faeef2_bdaf9ec6-4ab8-492f-a30a-f313b38f5d36", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:18 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "snap_name": "eb1bc411-f674-4652-a897-abf914faeef2", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:18 compute-0 ceph-mon[75176]: pgmap v1055: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 72 KiB/s wr, 8 op/s
Nov 29 05:39:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 05:39:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:20 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:20 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:39:20 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:39:20 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:39:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:20 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:20 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:21 compute-0 ceph-mon[75176]: pgmap v1056: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 05:39:21 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:39:21 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:21 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:21 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "format": "json"}]: dispatch
Nov 29 05:39:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:adc37617-82af-4ff2-b1ff-41acd332035e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:adc37617-82af-4ff2-b1ff-41acd332035e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:21 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:21.155+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'adc37617-82af-4ff2-b1ff-41acd332035e' of type subvolume
Nov 29 05:39:21 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'adc37617-82af-4ff2-b1ff-41acd332035e' of type subvolume
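[Editor's note] The (95) reply is not a workflow failure: "fs clone status" is only defined for subvolumes created as clones, so the client can probe it and treat EOPNOTSUPP as "plain subvolume, no copy in progress", then proceed straight to removal as it does on the next line. A hedged sketch of such a probe; the subprocess usage and stderr substring check are illustrative, not the OpenStack client's actual code.

    import subprocess

    def is_pending_clone(vol, name):
        # Probe "fs clone status"; a (95) EOPNOTSUPP reply like the one logged
        # above means the subvolume was never a clone, so there is nothing to
        # wait on before deleting it.
        try:
            subprocess.run(
                ["ceph", "fs", "clone", "status", vol, name,
                 "--format", "json"],
                check=True, capture_output=True,
            )
            return True   # status returned: the subvolume is (or was) a clone
        except subprocess.CalledProcessError as e:
            if b"not allowed on subvolume" in e.stderr:
                return False
            raise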
Nov 29 05:39:21 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
Nov 29 05:39:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/adc37617-82af-4ff2-b1ff-41acd332035e'' moved to trashcan
Nov 29 05:39:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:adc37617-82af-4ff2-b1ff-41acd332035e, vol_name:cephfs) < ""
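[Editor's note] "fs subvolume rm" returns quickly because removal is a soft delete: the subvolume directory is moved to a trash area ("moved to trashcan") and an async purge job is queued ("queuing job for volume 'cephfs'"), so the caller never blocks on recursive deletion. A minimal local sketch of that design, under the assumption of a single trash directory; names are illustrative.

    import os
    import shutil
    import threading
    import uuid

    TRASH = "/volumes/_deleting"   # hypothetical trash location

    def soft_delete(path):
        # The rename is O(1) and atomic; the expensive recursive purge runs
        # on a background worker, mirroring the trashcan + async job pattern.
        target = os.path.join(TRASH, uuid.uuid4().hex)
        os.rename(path, target)
        threading.Thread(target=shutil.rmtree,
                         args=(target,), daemon=True).start()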
Nov 29 05:39:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 05:39:22 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:22 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "format": "json"}]: dispatch
Nov 29 05:39:22 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "adc37617-82af-4ff2-b1ff-41acd332035e", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:22 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:39:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/.meta.tmp'
Nov 29 05:39:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/.meta.tmp' to config b'/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/.meta'
Nov 29 05:39:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:39:22 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "format": "json"}]: dispatch
Nov 29 05:39:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:39:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
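[Editor's note] Create-then-getpath is the pairing used to provision a share: create carves out a sized, namespace-isolated subvolume, and getpath returns the mountable leaf path (the b9af2a18-... style directory under /volumes/_nogroup/<name> seen in the caps above). A sketch issuing both commands with the options from the dispatch lines; the wrapper function is illustrative.

    import subprocess

    def provision(vol, name, size_bytes):
        # Options mirror the JSON payloads in the dispatch log lines above.
        subprocess.run(
            ["ceph", "fs", "subvolume", "create", vol, name,
             "--size", str(size_bytes),
             "--namespace-isolated", "--mode", "0755"],
            check=True,
        )
        out = subprocess.run(
            ["ceph", "fs", "subvolume", "getpath", vol, name],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()

    provision("cephfs", "a98b9fa5-d939-4fac-9215-346a94abca4f", 1073741824)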
Nov 29 05:39:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:22 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 29 05:39:23 compute-0 ceph-mon[75176]: pgmap v1057: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 80 KiB/s wr, 9 op/s
Nov 29 05:39:23 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 29 05:39:23 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s wr, 4 op/s
Nov 29 05:39:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "format": "json"}]: dispatch
Nov 29 05:39:24 compute-0 ceph-mon[75176]: osdmap e147: 3 total, 3 up, 3 in
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:39:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:39:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 05:39:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:39:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:39:24 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5ebeea41-cd85-43e6-b90c-d40733412d03/.meta.tmp'
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5ebeea41-cd85-43e6-b90c-d40733412d03/.meta.tmp' to config b'/volumes/_nogroup/5ebeea41-cd85-43e6-b90c-d40733412d03/.meta'
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "format": "json"}]: dispatch
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 05:39:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 05:39:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:24 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:25 compute-0 podman[266656]: 2025-11-29 05:39:25.003092163 +0000 UTC m=+0.057360948 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 05:39:25 compute-0 ceph-mon[75176]: pgmap v1059: 305 pgs: 305 active+clean; 54 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s wr, 4 op/s
Nov 29 05:39:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:39:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:39:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:39:25 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 55 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 88 KiB/s wr, 9 op/s
Nov 29 05:39:26 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:39:26 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:39:26 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:26 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "format": "json"}]: dispatch
Nov 29 05:39:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 05:39:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/.meta.tmp'
Nov 29 05:39:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/.meta.tmp' to config b'/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/.meta'
Nov 29 05:39:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 05:39:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "format": "json"}]: dispatch
Nov 29 05:39:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 05:39:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 05:39:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:27 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:27 compute-0 ceph-mon[75176]: pgmap v1060: 305 pgs: 305 active+clean; 55 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 88 KiB/s wr, 9 op/s
Nov 29 05:39:27 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:39:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:39:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:39:27 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:39:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:27 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:27 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 05:39:27 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:27.987892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:39:27 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 29 05:39:27 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394767987956, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2381, "num_deletes": 254, "total_data_size": 2856436, "memory_usage": 2913288, "flush_reason": "Manual Compaction"}
Nov 29 05:39:27 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394768007419, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 2796236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21143, "largest_seqno": 23523, "table_properties": {"data_size": 2785829, "index_size": 6325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25686, "raw_average_key_size": 21, "raw_value_size": 2763316, "raw_average_value_size": 2302, "num_data_blocks": 280, "num_entries": 1200, "num_filter_entries": 1200, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394620, "oldest_key_time": 1764394620, "file_creation_time": 1764394767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 19561 microseconds, and 7347 cpu microseconds.
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.007458) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 2796236 bytes OK
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.007481) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.008986) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.009019) EVENT_LOG_v1 {"time_micros": 1764394768009014, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.009034) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 2845623, prev total WAL file size 2845623, number of live WAL files 2.
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.009719) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(2730KB)], [50(7561KB)]
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394768009749, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10539189, "oldest_snapshot_seqno": -1}
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5157 keys, 8783966 bytes, temperature: kUnknown
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394768064582, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8783966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8747553, "index_size": 22415, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 127047, "raw_average_key_size": 24, "raw_value_size": 8652875, "raw_average_value_size": 1677, "num_data_blocks": 934, "num_entries": 5157, "num_filter_entries": 5157, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.064818) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8783966 bytes
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.066510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.9 rd, 160.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.4 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(6.9) write-amplify(3.1) OK, records in: 5683, records dropped: 526 output_compression: NoCompression
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.066528) EVENT_LOG_v1 {"time_micros": 1764394768066518, "job": 26, "event": "compaction_finished", "compaction_time_micros": 54910, "compaction_time_cpu_micros": 18463, "output_level": 6, "num_output_files": 1, "total_output_size": 8783966, "num_input_records": 5683, "num_output_records": 5157, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 55 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 88 KiB/s wr, 9 op/s
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394768067048, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394768068443, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.009685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.068494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.068500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.068503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.068506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:39:28 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:39:28.068508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
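[Editor's note] The "compacted to:" summary above carries enough to sanity-check the mon store's write amplification: job 26 read one 2.7 MB L0 file plus one 7.4 MB L6 file and wrote one 8.4 MB L6 file, so write-amplify is about 8.4 / 2.7 and read-write-amplify about (2.7 + 7.4 + 8.4) / 2.7, matching the reported 3.1 and 6.9. A short arithmetic check:

    # Figures from the compaction summary (MB): in(2.7, 7.4) out(8.4)
    l0_in, l6_in, out = 2.7, 7.4, 8.4
    write_amp = out / l0_in                   # ~3.1, matches write-amplify(3.1)
    rw_amp = (l0_in + l6_in + out) / l0_in    # ~6.9, matches read-write-amplify(6.9)
    print(round(write_amp, 1), round(rw_amp, 1))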
Nov 29 05:39:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "format": "json"}]: dispatch
Nov 29 05:39:28 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:39:28 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:28 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "format": "json"}]: dispatch
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5ebeea41-cd85-43e6-b90c-d40733412d03, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5ebeea41-cd85-43e6-b90c-d40733412d03, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5ebeea41-cd85-43e6-b90c-d40733412d03' of type subvolume
Nov 29 05:39:28 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:28.608+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5ebeea41-cd85-43e6-b90c-d40733412d03' of type subvolume
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5ebeea41-cd85-43e6-b90c-d40733412d03'' moved to trashcan
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5ebeea41-cd85-43e6-b90c-d40733412d03, vol_name:cephfs) < ""
Nov 29 05:39:29 compute-0 podman[266676]: 2025-11-29 05:39:29.027168779 +0000 UTC m=+0.074248068 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 05:39:29 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:39:29 compute-0 ceph-mon[75176]: pgmap v1061: 305 pgs: 305 active+clean; 55 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 88 KiB/s wr, 9 op/s
Nov 29 05:39:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 55 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 107 KiB/s wr, 11 op/s
Nov 29 05:39:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "format": "json"}]: dispatch
Nov 29 05:39:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5ebeea41-cd85-43e6-b90c-d40733412d03", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 29 05:39:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 29 05:39:30 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 29 05:39:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:39:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:39:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:30 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 05:39:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:39:31 compute-0 ceph-mon[75176]: pgmap v1062: 305 pgs: 305 active+clean; 55 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 107 KiB/s wr, 11 op/s
Nov 29 05:39:31 compute-0 ceph-mon[75176]: osdmap e148: 3 total, 3 up, 3 in
Nov 29 05:39:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "mon", "allow r"], "format": "json"}]': finished
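
The `auth get-or-create` payloads above document the cap model behind `fs subvolume authorize`: an MDS cap pinned to the subvolume path, an OSD cap pinned to the data pool plus the subvolume's RADOS namespace (the effect of `namespace_isolated: true` at create time), and read-only mon access. A sketch that rebuilds the same mon-command JSON from its parts; the values in the demo call are copied verbatim from the log:

    import json

    def build_authorize_cmd(auth_id: str, sub_path: str, pool: str,
                            namespace: str, access_level: str = "rw") -> str:
        """Recreate the 'auth get-or-create' payload the mgr dispatches:
        path-scoped MDS cap, pool+namespace-scoped OSD cap, read-only mon."""
        cmd = {
            "prefix": "auth get-or-create",
            "entity": f"client.{auth_id}",
            "caps": [
                "mds", f"allow {access_level} path={sub_path}",
                "osd", f"allow {access_level} pool={pool} namespace={namespace}",
                "mon", "allow r",
            ],
            "format": "json",
        }
        return json.dumps(cmd)

    # Reproduces the payload for client.tempest-cephx-id-887052356 above:
    print(build_authorize_cmd(
        "tempest-cephx-id-887052356",
        "/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/"
        "887c3b4c-9944-468a-a71d-7c57e0e4aba3",
        "cephfs.cephfs.data",
        "fsvolumens_c7cad0a5-6ce4-4ca6-994f-ac3363a79f14",
    ))
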
Nov 29 05:39:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:39:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:39:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:39:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 05:39:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:39:31 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:39:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:39:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:39:31 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:39:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
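
Deauthorize and evict form a two-step revoke, as the lines above show: `auth rm` deletes the cephx key so no new client can authenticate, then `fs subvolume evict` asks the MDS to drop any live sessions still matching the auth_name and client_metadata.root filters. A sketch of the same sequence via the CLI, under the same `client.openstack` keyring assumption as before:

    import subprocess

    def revoke_access(vol: str, sub: str, auth_id: str) -> None:
        """Mirror the two-step revoke in the log: drop the cephx key first,
        then kick any CephFS sessions that are still using it."""
        base = ["ceph", "--id", "openstack", "fs", "subvolume"]
        subprocess.run(base + ["deauthorize", vol, sub, auth_id], check=True)
        subprocess.run(base + ["evict", vol, sub, auth_id], check=True)

    revoke_access("cephfs", "848ba3c8-c30f-497b-9372-9c6fce9360b1", "alice bob")
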
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 55 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 574 B/s rd, 120 KiB/s wr, 11 op/s
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee8187c1-56b3-4603-8456-6c0a4e9f03fd/.meta.tmp'
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee8187c1-56b3-4603-8456-6c0a4e9f03fd/.meta.tmp' to config b'/volumes/_nogroup/ee8187c1-56b3-4603-8456-6c0a4e9f03fd/.meta'
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
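
The metadata_manager pair inside each create (write `.meta.tmp`, then rename it to `.meta`) is the usual crash-safe config update: a reader sees either the complete old file or the complete new one, never a torn write. The same pattern on a local filesystem looks like this (the mgr goes through libcephfs, and whether it fsyncs before renaming is not visible in the log, so the fsync below is an assumption):

    import os

    def atomic_write(path: str, data: bytes) -> None:
        """Write-then-rename so `path` is always old or new, never partial."""
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make data durable before publishing it
        os.replace(tmp, path)      # atomic rename on POSIX filesystems

    atomic_write("/tmp/example/.meta", b"[GLOBAL]\nversion = 2\n")
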
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "format": "json"}]: dispatch
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 05:39:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:32 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:32 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:39:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:39:32 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:39:32 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp'
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp' to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta'
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "format": "json"}]: dispatch
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:39:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:39:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:32 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:32 compute-0 nova_compute[254898]: 2025-11-29 05:39:32.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:39:32 compute-0 nova_compute[254898]: 2025-11-29 05:39:32.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:39:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:39:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:39:33 compute-0 ceph-mon[75176]: pgmap v1064: 305 pgs: 305 active+clean; 55 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 574 B/s rd, 120 KiB/s wr, 11 op/s
Nov 29 05:39:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "format": "json"}]: dispatch
Nov 29 05:39:33 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:33 compute-0 nova_compute[254898]: 2025-11-29 05:39:33.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 55 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 107 KiB/s wr, 10 op/s
Nov 29 05:39:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "format": "json"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 05:39:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:39:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 05:39:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3
Nov 29 05:39:34 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14/887c3b4c-9944-468a-a71d-7c57e0e4aba3],prefix=session evict} (starting...)
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "format": "json"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:34 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:34.373+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c7cad0a5-6ce4-4ca6-994f-ac3363a79f14' of type subvolume
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c7cad0a5-6ce4-4ca6-994f-ac3363a79f14' of type subvolume
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c7cad0a5-6ce4-4ca6-994f-ac3363a79f14'' moved to trashcan
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c7cad0a5-6ce4-4ca6-994f-ac3363a79f14, vol_name:cephfs) < ""
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:39:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:39:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:34 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:34 compute-0 nova_compute[254898]: 2025-11-29 05:39:34.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:39:34 compute-0 nova_compute[254898]: 2025-11-29 05:39:34.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:39:34 compute-0 nova_compute[254898]: 2025-11-29 05:39:34.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:39:35 compute-0 ceph-mon[75176]: pgmap v1065: 305 pgs: 305 active+clean; 55 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 107 KiB/s wr, 10 op/s
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c7cad0a5-6ce4-4ca6-994f-ac3363a79f14", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db", "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4fc216c7-7565-440e-ba91-0a6f65473f45/.meta.tmp'
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4fc216c7-7565-440e-ba91-0a6f65473f45/.meta.tmp' to config b'/volumes/_nogroup/4fc216c7-7565-440e-ba91-0a6f65473f45/.meta'
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "format": "json"}]: dispatch
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 05:39:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 05:39:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 56 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 119 KiB/s wr, 11 op/s
Nov 29 05:39:36 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db", "format": "json"}]: dispatch
Nov 29 05:39:36 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:36 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "format": "json"}]: dispatch
Nov 29 05:39:36 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:36 compute-0 ceph-mon[75176]: pgmap v1066: 305 pgs: 305 active+clean; 56 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 119 KiB/s wr, 11 op/s
Nov 29 05:39:36 compute-0 nova_compute[254898]: 2025-11-29 05:39:36.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:39:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 05:39:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/.meta.tmp'
Nov 29 05:39:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/.meta.tmp' to config b'/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/.meta'
Nov 29 05:39:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 05:39:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "format": "json"}]: dispatch
Nov 29 05:39:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 05:39:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 05:39:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:37 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:39:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7231 writes, 27K keys, 7231 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7231 writes, 1573 syncs, 4.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1451 writes, 3407 keys, 1451 commit groups, 1.0 writes per commit group, ingest: 1.89 MB, 0.00 MB/s
                                           Interval WAL: 1451 writes, 597 syncs, 2.43 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
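
The RocksDB stats block above is internally consistent and handy to sanity-check when chasing WAL pressure on an OSD: 7231 cumulative WAL writes over 1573 syncs is the reported 4.60 writes per sync, and the interval figures (1451 writes, 597 syncs) give the 2.43. A quick check in Python:

    # Values copied from the "DB Stats" dump above.
    cum_writes, cum_syncs = 7231, 1573
    int_writes, int_syncs = 1451, 597

    print(f"cumulative writes/sync: {cum_writes / cum_syncs:.2f}")  # 4.60
    print(f"interval   writes/sync: {int_writes / int_syncs:.2f}")  # 2.43
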
Nov 29 05:39:37 compute-0 nova_compute[254898]: 2025-11-29 05:39:37.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:39:37 compute-0 nova_compute[254898]: 2025-11-29 05:39:37.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:39:37 compute-0 nova_compute[254898]: 2025-11-29 05:39:37.998 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:39:37 compute-0 nova_compute[254898]: 2025-11-29 05:39:37.999 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:37.999 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:38.000 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:38.000 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:39:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 56 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 119 KiB/s wr, 11 op/s
Nov 29 05:39:38 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:38 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:39:38 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:39:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 05:39:38 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:39:38 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:39:38 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:38 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:38 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:38 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:39:38 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:39:38 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:38 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:38 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:39:38 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/517717071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:38.460 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
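
Nova's resource tracker sizes RBD-backed storage by shelling out to `ceph df --format=json` (the Running cmd / returned-0 pair above, which the mon audits as a `df` dispatch). A sketch of consuming that output; the `stats.total_avail_bytes` key matches recent Ceph releases but should be treated as an assumption rather than a stable contract:

    import json
    import subprocess

    def ceph_free_gib(conf: str = "/etc/ceph/ceph.conf",
                      user: str = "openstack") -> float:
        """Run the same command nova logs above; report cluster free space."""
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        stats = json.loads(out)["stats"]           # assumed key layout
        return stats["total_avail_bytes"] / 2**30  # bytes -> GiB
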
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:38.655 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:38.656 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5082MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:38.656 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:38.657 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:38.726 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:38.727 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:39:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "format": "json"}]: dispatch
Nov 29 05:39:38 compute-0 ceph-mon[75176]: pgmap v1067: 305 pgs: 305 active+clean; 56 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 119 KiB/s wr, 11 op/s
Nov 29 05:39:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:39:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:39:38 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:39:38 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/517717071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:39:38 compute-0 nova_compute[254898]: 2025-11-29 05:39:38.750 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db", "target_sub_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, target_sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp'
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp' to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta'
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.clone_index] tracking-id e2d24acc-59c4-4926-91ba-61c4618234e2 for path b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944'
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp'
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp' to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta'
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, target_sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
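
As the async_cloner lines that follow make clear, `fs subvolume snapshot clone` only enqueues the copy; completion has to be observed by polling `fs clone status` until the reported state leaves `pending`/`in-progress`. A polling sketch under the same CLI and keyring assumptions as earlier:

    import json
    import subprocess
    import time

    def wait_for_clone(vol: str, clone: str, timeout: float = 300.0) -> str:
        """Poll 'ceph fs clone status' until the async cloner finishes."""
        deadline = time.monotonic() + timeout
        state = "unknown"
        while time.monotonic() < deadline:
            out = subprocess.check_output(
                ["ceph", "--id", "openstack", "fs", "clone", "status",
                 vol, clone, "--format", "json"])
            state = json.loads(out)["status"]["state"]
            if state not in ("pending", "in-progress"):
                return state                  # e.g. "complete" or "failed"
            time.sleep(2)
        raise TimeoutError(f"clone {clone} still {state} after {timeout}s")
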
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.176+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.176+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.176+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.176+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.176+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 70cb9e84-4e7b-4e83-b5ff-872d8a0e3944)
Nov 29 05:39:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:39.200+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 70cb9e84-4e7b-4e83-b5ff-872d8a0e3944) -- by 0 seconds
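
Annotation: "Delayed cloning ... by 0 seconds" is the async cloner honoring the volumes module's clone delay, which defaults to 0 so the job starts immediately. A sketch for inspecting or tuning it, assuming the snapshot_clone_delay option present in recent Ceph releases (verify on this build):

    ceph config get mgr mgr/volumes/snapshot_clone_delay
    ceph config set mgr mgr/volumes/snapshot_clone_delay 0   # default: start clones immediately
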
Nov 29 05:39:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:39:39 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3650996313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp'
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp' to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta'
Nov 29 05:39:39 compute-0 nova_compute[254898]: 2025-11-29 05:39:39.247 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:39:39 compute-0 nova_compute[254898]: 2025-11-29 05:39:39.252 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:39:39 compute-0 nova_compute[254898]: 2025-11-29 05:39:39.264 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:39:39 compute-0 nova_compute[254898]: 2025-11-29 05:39:39.265 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:39:39 compute-0 nova_compute[254898]: 2025-11-29 05:39:39.265 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
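
Annotation: the 0.497s "ceph df" above is nova's resource tracker sizing its RBD-backed DISK_GB inventory. To reproduce the probe by hand and pull out the totals, a sketch assuming the stats field names of the ceph df JSON layout:

    ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf \
        | jq '.stats | {total_bytes, total_avail_bytes}'
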
Nov 29 05:39:39 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:39 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:39 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db", "target_sub_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 05:39:39 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 05:39:39 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3650996313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 05:39:40 compute-0 podman[266773]: 2025-11-29 05:39:40.007414343 +0000 UTC m=+0.060722135 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 05:39:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 57 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 141 KiB/s wr, 15 op/s
Nov 29 05:39:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:40 compute-0 nova_compute[254898]: 2025-11-29 05:39:40.267 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:39:40 compute-0 nova_compute[254898]: 2025-11-29 05:39:40.267 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:39:40 compute-0 nova_compute[254898]: 2025-11-29 05:39:40.267 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:39:40 compute-0 nova_compute[254898]: 2025-11-29 05:39:40.283 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:39:40 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:40 compute-0 ceph-mon[75176]: pgmap v1068: 305 pgs: 305 active+clean; 57 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 141 KiB/s wr, 15 op/s
Nov 29 05:39:40 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.csskcz(active, since 30m)
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:39:41
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', 'vms']
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
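
Annotation: a balancer pass in upmap mode that prepares 0/10 changes means no PG remapping was worth proposing; with 305/305 PGs active+clean this is the expected steady state. The same information interactively, via the standard balancer commands:

    ceph balancer status
    ceph balancer eval     # score of the current PG distribution
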
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:39:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
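
Annotation: the rbd_support handlers reload their trash-purge and mirror-snapshot schedules for each RBD pool (vms, volumes, backups, images); the empty start_after= appears to be a pagination cursor meaning "scan from the beginning", not an error. To list what is scheduled, a sketch (the recursive flag spelling may differ by release):

    rbd trash purge schedule ls -R
    rbd mirror snapshot schedule ls -R
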
Nov 29 05:39:41 compute-0 ceph-mon[75176]: mgrmap e14: compute-0.csskcz(active, since 30m)
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 57 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 520 B/s rd, 120 KiB/s wr, 13 op/s
Nov 29 05:39:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:39:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2999 syncs, 3.64 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3872 writes, 13K keys, 3872 commit groups, 1.0 writes per commit group, ingest: 20.11 MB, 0.03 MB/s
                                           Interval WAL: 3872 writes, 1699 syncs, 2.28 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:39:42 compute-0 ceph-mon[75176]: pgmap v1069: 305 pgs: 305 active+clean; 57 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 520 B/s rd, 120 KiB/s wr, 13 op/s
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.snap/54db2b9e-cb54-440e-8afd-6c23560987db/54e69477-7697-43a2-9122-006fb641f43b' to b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/75c755e3-5ba3-4412-8578-c62be99c7fab'
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5b16d258-3e4e-4612-860f-4a4dc4e6aef6/.meta.tmp'
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5b16d258-3e4e-4612-860f-4a4dc4e6aef6/.meta.tmp' to config b'/volumes/_nogroup/5b16d258-3e4e-4612-860f-4a4dc4e6aef6/.meta'
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "format": "json"}]: dispatch
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp'
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp' to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta'
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.clone_index] untracking e2d24acc-59c4-4926-91ba-61c4618234e2
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp'
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp' to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta'
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp'
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta.tmp' to config b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944/.meta'
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 70cb9e84-4e7b-4e83-b5ff-872d8a0e3944)
Nov 29 05:39:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
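
Annotation: the create/getpath pair above is the provisioning path for a new 1 GiB share. CLI equivalent, with size, mode, and namespace isolation taken verbatim from the dispatched JSON:

    ceph --id openstack fs subvolume create cephfs 5b16d258-3e4e-4612-860f-4a4dc4e6aef6 \
        --size 1073741824 --namespace-isolated --mode 0755
    ceph --id openstack fs subvolume getpath cephfs 5b16d258-3e4e-4612-860f-4a4dc4e6aef6
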
Nov 29 05:39:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:39:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:39:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 05:39:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_06420fd0-e9c0-463d-9475-8429a0c8fd0d", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_06420fd0-e9c0-463d-9475-8429a0c8fd0d", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_06420fd0-e9c0-463d-9475-8429a0c8fd0d", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
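
Annotation: "fs subvolume authorize" mints a cephx identity capped to the subvolume's path on the MDS and to its fsvolumens_* RADOS namespace on the data pool, exactly as the auth get-or-create caps above show. A CLI sketch, with flag spellings inferred from the dispatched JSON keys:

    ceph --id openstack fs subvolume authorize cephfs \
        06420fd0-e9c0-463d-9475-8429a0c8fd0d tempest-cephx-id-887052356 \
        --access_level=rw --tenant_id=a05f740db7b94303aac90d6f217f853a
    ceph auth get client.tempest-cephx-id-887052356   # inspect the resulting caps
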
Nov 29 05:39:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:39:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:39:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:43 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_06420fd0-e9c0-463d-9475-8429a0c8fd0d", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_06420fd0-e9c0-463d-9475-8429a0c8fd0d", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 57 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 118 KiB/s wr, 12 op/s
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 05:39:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:39:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 05:39:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:39:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7
Nov 29 05:39:44 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d/e125f618-e0d3-4201-8eee-2d8020e28da7],prefix=session evict} (starting...)
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
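
Annotation: revocation is a two-step sequence: deauthorize deletes the cephx key (the "auth rm" above), and evict asks the MDS to drop any live sessions matching both the auth_name and the client's mount root, which is the session-evict filter visible in the ceph-mds line. CLI sketch:

    ceph --id openstack fs subvolume deauthorize cephfs \
        06420fd0-e9c0-463d-9475-8429a0c8fd0d tempest-cephx-id-887052356
    ceph --id openstack fs subvolume evict cephfs \
        06420fd0-e9c0-463d-9475-8429a0c8fd0d tempest-cephx-id-887052356
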
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "format": "json"}]: dispatch
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:44 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:44.638+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '06420fd0-e9c0-463d-9475-8429a0c8fd0d' of type subvolume
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '06420fd0-e9c0-463d-9475-8429a0c8fd0d' of type subvolume
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/06420fd0-e9c0-463d-9475-8429a0c8fd0d'' moved to trashcan
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:06420fd0-e9c0-463d-9475-8429a0c8fd0d, vol_name:cephfs) < ""
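
Annotation: the (95) Operation not supported reply at 05:39:44 is benign: "fs clone status" only applies to subvolumes of type clone, and 06420fd0... is a plain subvolume, so the caller proceeds straight to removal. With --force the subvolume is moved to the trash and purged by an async job (the "queuing job" line) rather than deleted inline:

    ceph --id openstack fs subvolume rm cephfs 06420fd0-e9c0-463d-9475-8429a0c8fd0d --force
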
Nov 29 05:39:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:39:44 compute-0 ceph-mon[75176]: pgmap v1070: 305 pgs: 305 active+clean; 57 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 118 KiB/s wr, 12 op/s
Nov 29 05:39:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:39:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:39:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) v1
Nov 29 05:39:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) v1
Nov 29 05:39:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:39:45 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 05:39:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:39:45 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:39:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "format": "json"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "06420fd0-e9c0-463d-9475-8429a0c8fd0d", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice", "format": "json"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice"}]: dispatch
Nov 29 05:39:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 29 05:39:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice", "format": "json"}]: dispatch
Nov 29 05:39:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 57 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 184 KiB/s wr, 19 op/s
Nov 29 05:39:46 compute-0 ceph-mon[75176]: pgmap v1071: 305 pgs: 305 active+clean; 57 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 184 KiB/s wr, 19 op/s
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/adec6cb7-3928-4a56-9d48-76b4d10cc25a/.meta.tmp'
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/adec6cb7-3928-4a56-9d48-76b4d10cc25a/.meta.tmp' to config b'/volumes/_nogroup/adec6cb7-3928-4a56-9d48-76b4d10cc25a/.meta'
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "format": "json"}]: dispatch
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 05:39:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:47 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:47 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:47 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "format": "json"}]: dispatch
Nov 29 05:39:47 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 05:39:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:39:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7984 writes, 30K keys, 7984 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 7984 writes, 1865 syncs, 4.28 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2353 writes, 6787 keys, 2353 commit groups, 1.0 writes per commit group, ingest: 7.64 MB, 0.01 MB/s
                                           Interval WAL: 2353 writes, 1005 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/.meta.tmp'
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/.meta.tmp' to config b'/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/.meta'
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "format": "json"}]: dispatch
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 05:39:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 05:39:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:39:47 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 57 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 128 KiB/s wr, 13 op/s
Nov 29 05:39:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:39:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:48 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:39:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:48 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:39:48 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "format": "json"}]: dispatch
Nov 29 05:39:48 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:39:48 compute-0 ceph-mon[75176]: pgmap v1072: 305 pgs: 305 active+clean; 57 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 128 KiB/s wr, 13 op/s
Nov 29 05:39:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:49 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:50 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Check health
Nov 29 05:39:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 58 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 178 KiB/s wr, 19 op/s
Nov 29 05:39:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:50 compute-0 ceph-mon[75176]: pgmap v1073: 305 pgs: 305 active+clean; 58 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 178 KiB/s wr, 19 op/s
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:39:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:39:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 05:39:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_dddb87ae-5fcb-4c01-90f6-c57d130f8474", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_dddb87ae-5fcb-4c01-90f6-c57d130f8474", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_dddb87ae-5fcb-4c01-90f6-c57d130f8474", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00027146873168587614 of space, bias 4.0, pg target 0.32576247802305136 quantized to 16 (current 16)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 8.266792016669923e-07 of space, bias 1.0, pg target 0.0002480037605000977 quantized to 32 (current 32)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "format": "json"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:51 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:51.587+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'adec6cb7-3928-4a56-9d48-76b4d10cc25a' of type subvolume
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'adec6cb7-3928-4a56-9d48-76b4d10cc25a' of type subvolume
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/adec6cb7-3928-4a56-9d48-76b4d10cc25a'' moved to trashcan
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:adec6cb7-3928-4a56-9d48-76b4d10cc25a, vol_name:cephfs) < ""
Nov 29 05:39:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_dddb87ae-5fcb-4c01-90f6-c57d130f8474", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_dddb87ae-5fcb-4c01-90f6-c57d130f8474", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:39:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 05:39:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:39:51 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:51 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 58 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 115 KiB/s wr, 13 op/s
Nov 29 05:39:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "format": "json"}]: dispatch
Nov 29 05:39:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "adec6cb7-3928-4a56-9d48-76b4d10cc25a", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:39:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:39:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:52 compute-0 ceph-mon[75176]: pgmap v1074: 305 pgs: 305 active+clean; 58 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 115 KiB/s wr, 13 op/s
Nov 29 05:39:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 58 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 115 KiB/s wr, 12 op/s
Nov 29 05:39:54 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:54 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:39:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 05:39:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:39:55 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d
Nov 29 05:39:55 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474/4b5f69bf-e0e1-4618-a5c9-3013324a337d],prefix=session evict} (starting...)
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 ceph-mon[75176]: pgmap v1075: 305 pgs: 305 active+clean; 58 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 115 KiB/s wr, 12 op/s
Nov 29 05:39:55 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:55 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:39:55 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:39:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:39:55 compute-0 sshd-session[266795]: Invalid user david from 45.120.216.232 port 33246
Nov 29 05:39:55 compute-0 podman[266798]: 2025-11-29 05:39:55.534053212 +0000 UTC m=+0.064929405 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "format": "json"}]: dispatch
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:55.640+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b16d258-3e4e-4612-860f-4a4dc4e6aef6' of type subvolume
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b16d258-3e4e-4612-860f-4a4dc4e6aef6' of type subvolume
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5b16d258-3e4e-4612-860f-4a4dc4e6aef6'' moved to trashcan
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b16d258-3e4e-4612-860f-4a4dc4e6aef6, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 sshd-session[266795]: Received disconnect from 45.120.216.232 port 33246:11: Bye Bye [preauth]
Nov 29 05:39:55 compute-0 sshd-session[266795]: Disconnected from invalid user david 45.120.216.232 port 33246 [preauth]
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "format": "json"}]: dispatch
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:55.736+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dddb87ae-5fcb-4c01-90f6-c57d130f8474' of type subvolume
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dddb87ae-5fcb-4c01-90f6-c57d130f8474' of type subvolume
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dddb87ae-5fcb-4c01-90f6-c57d130f8474'' moved to trashcan
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dddb87ae-5fcb-4c01-90f6-c57d130f8474, vol_name:cephfs) < ""
Nov 29 05:39:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:39:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:39:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:56 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice_bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:39:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 58 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 150 KiB/s wr, 16 op/s
Nov 29 05:39:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:39:56 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:56 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:56 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:56 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:56 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "format": "json"}]: dispatch
Nov 29 05:39:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b16d258-3e4e-4612-860f-4a4dc4e6aef6", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "format": "json"}]: dispatch
Nov 29 05:39:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dddb87ae-5fcb-4c01-90f6-c57d130f8474", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:39:57 compute-0 ceph-mon[75176]: pgmap v1076: 305 pgs: 305 active+clean; 58 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 150 KiB/s wr, 16 op/s
Nov 29 05:39:58 compute-0 sshd-session[266819]: Invalid user ubuntu from 61.240.213.113 port 53838
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 58 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 9 op/s
Nov 29 05:39:58 compute-0 sshd-session[266819]: Received disconnect from 61.240.213.113 port 53838:11:  [preauth]
Nov 29 05:39:58 compute-0 sshd-session[266819]: Disconnected from invalid user ubuntu 61.240.213.113 port 53838 [preauth]
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:39:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:39:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:58 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 05:39:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:39:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:58 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "format": "json"}]: dispatch
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4fc216c7-7565-440e-ba91-0a6f65473f45, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4fc216c7-7565-440e-ba91-0a6f65473f45, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:39:58 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:39:58.660+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4fc216c7-7565-440e-ba91-0a6f65473f45' of type subvolume
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4fc216c7-7565-440e-ba91-0a6f65473f45' of type subvolume
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "force": true, "format": "json"}]: dispatch
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4fc216c7-7565-440e-ba91-0a6f65473f45'' moved to trashcan
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:39:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4fc216c7-7565-440e-ba91-0a6f65473f45, vol_name:cephfs) < ""
Nov 29 05:39:59 compute-0 ceph-mon[75176]: pgmap v1077: 305 pgs: 305 active+clean; 58 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 9 op/s
Nov 29 05:39:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:39:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:39:59 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:39:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) v1
Nov 29 05:39:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) v1
Nov 29 05:39:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:39:59 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:39:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:39:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:39:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:39:59 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:39:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:39:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 59 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 159 KiB/s wr, 17 op/s
Nov 29 05:40:00 compute-0 podman[266822]: 2025-11-29 05:40:00.087734773 +0000 UTC m=+0.132602166 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:40:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "format": "json"}]: dispatch
Nov 29 05:40:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4fc216c7-7565-440e-ba91-0a6f65473f45", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:00 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice_bob", "format": "json"}]: dispatch
Nov 29 05:40:00 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice_bob"}]: dispatch
Nov 29 05:40:00 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 29 05:40:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:40:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 29 05:40:01 compute-0 ceph-mon[75176]: pgmap v1078: 305 pgs: 305 active+clean; 59 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 159 KiB/s wr, 17 op/s
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 59 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 109 KiB/s wr, 11 op/s
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "format": "json"}]: dispatch
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:02 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:02.335+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee8187c1-56b3-4603-8456-6c0a4e9f03fd' of type subvolume
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee8187c1-56b3-4603-8456-6c0a4e9f03fd' of type subvolume
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee8187c1-56b3-4603-8456-6c0a4e9f03fd'' moved to trashcan
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee8187c1-56b3-4603-8456-6c0a4e9f03fd, vol_name:cephfs) < ""
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:40:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 05:40:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:40:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6
Nov 29 05:40:02 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6],prefix=session evict} (starting...)
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:40:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:40:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:40:02 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:40:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:40:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:02 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:02 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:40:03 compute-0 ceph-mon[75176]: pgmap v1079: 305 pgs: 305 active+clean; 59 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 109 KiB/s wr, 11 op/s
Nov 29 05:40:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:40:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:40:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:40:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 59 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 109 KiB/s wr, 11 op/s
Nov 29 05:40:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "format": "json"}]: dispatch
Nov 29 05:40:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee8187c1-56b3-4603-8456-6c0a4e9f03fd", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:05 compute-0 ceph-mon[75176]: pgmap v1080: 305 pgs: 305 active+clean; 59 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 109 KiB/s wr, 11 op/s
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:40:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:40:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:06 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 05:40:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:40:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 151 KiB/s wr, 15 op/s
Nov 29 05:40:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:06 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:06 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:06 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:06 compute-0 ceph-mon[75176]: pgmap v1081: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 151 KiB/s wr, 15 op/s
Nov 29 05:40:06 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:40:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:40:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 05:40:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:40:06 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:40:06 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:40:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
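Deauthorize only removes the key; the "fs subvolume evict" step that follows asks the MDS, via its admin socket, to drop any live sessions still using the old identity. The asok_command line above shows the two filters the mgr passes: the auth name and the client's mount root. Roughly the same eviction can be issued from the CLI with "ceph tell"; a sketch with the filter values copied from the log (session-filter syntax can vary between releases):

    import subprocess

    filters = [
        "auth_name=alice bob",
        "client_metadata.root=/volumes/_nogroup/"
        "848ba3c8-c30f-497b-9372-9c6fce9360b1/"
        "b9af2a18-fc04-4d96-ba1b-197da2f0632f",
    ]
    # Evict every CephFS client session matching both filters on this MDS.
    subprocess.run(
        ["ceph", "tell", "mds.cephfs.compute-0.mjtuko",
         "session", "evict", *filters],
        check=True,
    )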
Nov 29 05:40:07 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:40:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:40:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:40:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:40:07 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:40:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 117 KiB/s wr, 11 op/s
Nov 29 05:40:08 compute-0 ceph-mon[75176]: pgmap v1082: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 117 KiB/s wr, 11 op/s
Nov 29 05:40:09 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:40:09.050 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:40:09 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:40:09.051 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
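The metadata agent watches the OVN southbound database through ovsdbapp's IDL event layer: the "Matched UPDATE" line is a row event firing on the SB_Global table as northd bumps nb_cfg (7 to 8 here), after which the agent defers its own chassis-table write by 10 seconds so that many nodes do not hit the SB database at once. A stripped-down watcher in that style, with illustrative class and handler names (the real agent's event carries additional matching logic):

    from ovsdbapp.backend.ovs_idl import event

    class NbCfgBumpEvent(event.RowEvent):
        """Illustrative SB_Global watcher in the style of the agent's."""

        def __init__(self):
            # Fire only on updates to the (single) SB_Global row.
            super().__init__((self.ROW_UPDATE,), "SB_Global", None)

        def run(self, event, row, old):
            # 'old' carries only the changed columns; here nb_cfg went 7 -> 8.
            print("nb_cfg", getattr(old, "nb_cfg", "?"), "->", row.nb_cfg)

    # Registered on a connected IDL with:
    #   idl.notify_handler.watch_event(NbCfgBumpEvent())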
Nov 29 05:40:09 compute-0 sudo[266850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:09 compute-0 sudo[266850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:09 compute-0 sudo[266850]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:09 compute-0 sudo[266875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:40:09 compute-0 sudo[266875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:09 compute-0 sudo[266875]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:09 compute-0 sudo[266900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:09 compute-0 sudo[266900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:09 compute-0 sudo[266900]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:10 compute-0 sudo[266925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 05:40:10 compute-0 sudo[266925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 149 KiB/s wr, 16 op/s
Nov 29 05:40:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:40:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 05:40:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:40:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6
Nov 29 05:40:10 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6],prefix=session evict} (starting...)
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:40:10 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:40:10 compute-0 sudo[266925]: pam_unix(sudo:session): session closed for user root
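The repeating sudo triplets in this section are the mgr's cephadm module working this host over ssh as the unprivileged ceph-admin user: /bin/true probes the connection, "which python3" locates an interpreter, and the cluster's pinned cephadm copy under /var/lib/ceph/<fsid>/ then runs the actual step (check-host here; gather-facts and ceph-volume follow below). Replayed locally, the sequence reduces to roughly the following; the paths are copied from the log and the 895-second timeout is the orchestrator's remote-command timeout in this deployment:

    import subprocess

    fsid = "93f82912-647c-5e78-b081-707d0a2966d8"
    cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    subprocess.run(["sudo", "/bin/true"], check=True)              # connection probe
    subprocess.run(["sudo", "/bin/which", "python3"], check=True)  # find an interpreter
    subprocess.run(                                                # the actual step
        ["sudo", "/bin/python3", cephadm, "--timeout", "895", "check-host"],
        check=True,
    )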
Nov 29 05:40:10 compute-0 podman[266966]: 2025-11-29 05:40:10.268450757 +0000 UTC m=+0.052925787 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:40:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:40:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:40:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:10 compute-0 sudo[266992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:10 compute-0 sudo[266992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:10 compute-0 sudo[266992]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:40:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:40:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:40:10 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID alice bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:40:10 compute-0 sudo[267017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:40:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:40:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:10 compute-0 sudo[267017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:10 compute-0 sudo[267017]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:40:10 compute-0 sudo[267042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:10 compute-0 sudo[267042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:10 compute-0 sudo[267042]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:10 compute-0 sudo[267067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:40:10 compute-0 sudo[267067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:11 compute-0 sudo[267067]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:11 compute-0 sudo[267123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:11 compute-0 sudo[267123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:11 compute-0 sudo[267123]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:11 compute-0 ceph-mon[75176]: pgmap v1083: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 149 KiB/s wr, 16 op/s
Nov 29 05:40:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:40:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow r pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:11 compute-0 sudo[267148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:40:11 compute-0 sudo[267148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:11 compute-0 sudo[267148]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:11 compute-0 sudo[267173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:11 compute-0 sudo[267173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:11 compute-0 sudo[267173]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:11 compute-0 sudo[267198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- inventory --format=json-pretty --filter-for-batch
Nov 29 05:40:11 compute-0 sudo[267198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:40:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:40:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:40:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:40:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:40:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:40:11 compute-0 podman[267261]: 2025-11-29 05:40:11.649281687 +0000 UTC m=+0.037485165 container create c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 05:40:11 compute-0 systemd[1]: Started libpod-conmon-c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a.scope.
Nov 29 05:40:11 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:40:11 compute-0 podman[267261]: 2025-11-29 05:40:11.631485758 +0000 UTC m=+0.019689276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:40:11 compute-0 podman[267261]: 2025-11-29 05:40:11.732583724 +0000 UTC m=+0.120787222 container init c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:40:11 compute-0 podman[267261]: 2025-11-29 05:40:11.739110042 +0000 UTC m=+0.127313520 container start c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:40:11 compute-0 podman[267261]: 2025-11-29 05:40:11.742284798 +0000 UTC m=+0.130488296 container attach c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 05:40:11 compute-0 naughty_kare[267278]: 167 167
Nov 29 05:40:11 compute-0 systemd[1]: libpod-c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a.scope: Deactivated successfully.
Nov 29 05:40:11 compute-0 conmon[267278]: conmon c6e8ed5b8f8432d662e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a.scope/container/memory.events
Nov 29 05:40:11 compute-0 podman[267261]: 2025-11-29 05:40:11.745715461 +0000 UTC m=+0.133918939 container died c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:40:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-935431c4a73898ae19faf92d8545da1655cd7b130c475a366114d8976bcc7546-merged.mount: Deactivated successfully.
Nov 29 05:40:11 compute-0 podman[267261]: 2025-11-29 05:40:11.778508161 +0000 UTC m=+0.166711639 container remove c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:40:11 compute-0 systemd[1]: libpod-conmon-c6e8ed5b8f8432d662e854774ecca78e5359fa70453adbd60540ce268bb64d4a.scope: Deactivated successfully.
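Each wrapped ceph-volume call appears here as a complete throwaway container lifecycle (create, init, start, attach, died, remove, all within about 150 ms for naughty_kare); the conmon warning about memory.events is a side effect of the scope being torn down before conmon can read the cgroup file, not a failure of the run. Reduced to its essentials the wrapper behaves something like the sketch below; the image digest is copied from the log, while the entrypoint and the omitted bind mounts are assumptions about the real invocation:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # One-shot container removed on exit, like the naughty_kare run above
    # (assumed entrypoint; the real call also bind-mounts /etc/ceph, logs, etc.).
    subprocess.run(
        ["podman", "run", "--rm", "--privileged", "--net=host",
         "--entrypoint", "/usr/sbin/ceph-volume", image,
         "inventory", "--format=json-pretty"],
        check=True,
    )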
Nov 29 05:40:11 compute-0 podman[267302]: 2025-11-29 05:40:11.963334266 +0000 UTC m=+0.043641513 container create 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:40:12 compute-0 systemd[1]: Started libpod-conmon-3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0.scope.
Nov 29 05:40:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a26240a89b145561c9d012eba790153dce1eadca4422f35b33d7f9376974d6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a26240a89b145561c9d012eba790153dce1eadca4422f35b33d7f9376974d6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a26240a89b145561c9d012eba790153dce1eadca4422f35b33d7f9376974d6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a26240a89b145561c9d012eba790153dce1eadca4422f35b33d7f9376974d6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:12 compute-0 podman[267302]: 2025-11-29 05:40:12.031502099 +0000 UTC m=+0.111809356 container init 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:40:12 compute-0 podman[267302]: 2025-11-29 05:40:11.941832118 +0000 UTC m=+0.022139395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:40:12 compute-0 podman[267302]: 2025-11-29 05:40:12.03734708 +0000 UTC m=+0.117654317 container start 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 05:40:12 compute-0 podman[267302]: 2025-11-29 05:40:12.040558937 +0000 UTC m=+0.120866154 container attach 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 05:40:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 74 KiB/s wr, 8 op/s
Nov 29 05:40:12 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "r", "format": "json"}]: dispatch
Nov 29 05:40:13 compute-0 ceph-mon[75176]: pgmap v1084: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 74 KiB/s wr, 8 op/s
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]: [
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:     {
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:         "available": false,
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:         "ceph_device": false,
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:         "lsm_data": {},
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:         "lvs": [],
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:         "path": "/dev/sr0",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:         "rejected_reasons": [
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "Insufficient space (<5GB)",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "Has a FileSystem"
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:         ],
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:         "sys_api": {
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "actuators": null,
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "device_nodes": "sr0",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "devname": "sr0",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "human_readable_size": "482.00 KB",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "id_bus": "ata",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "model": "QEMU DVD-ROM",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "nr_requests": "2",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "parent": "/dev/sr0",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "partitions": {},
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "path": "/dev/sr0",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "removable": "1",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "rev": "2.5+",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "ro": "0",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "rotational": "1",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "sas_address": "",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "sas_device_handle": "",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "scheduler_mode": "mq-deadline",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "sectors": 0,
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "sectorsize": "2048",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "size": 493568.0,
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "support_discard": "2048",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "type": "disk",
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:             "vendor": "QEMU"
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:         }
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]:     }
Nov 29 05:40:13 compute-0 dreamy_torvalds[267319]: ]
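The JSON block printed by the dreamy_torvalds container is the ceph-volume inventory result requested with --filter-for-batch above: the host's only candidate device, /dev/sr0, is a 482 KB QEMU DVD-ROM, rejected both for size ("Insufficient space (<5GB)") and for carrying a filesystem, so no raw device on this host is eligible for automatic OSD creation. A short parse of that output, trimmed to the fields used here:

    import json

    inventory_text = """
    [{"available": false, "path": "/dev/sr0",
      "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]}]
    """

    for dev in json.loads(inventory_text):
        state = "usable" if dev["available"] else "rejected"
        print(dev["path"], state, "; ".join(dev["rejected_reasons"]))
    # -> /dev/sr0 rejected Insufficient space (<5GB); Has a FileSystem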
Nov 29 05:40:13 compute-0 systemd[1]: libpod-3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0.scope: Deactivated successfully.
Nov 29 05:40:13 compute-0 systemd[1]: libpod-3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0.scope: Consumed 1.401s CPU time.
Nov 29 05:40:13 compute-0 conmon[267319]: conmon 3ae85c4f40ff48139a75 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0.scope/container/memory.events
Nov 29 05:40:13 compute-0 podman[267302]: 2025-11-29 05:40:13.383369031 +0000 UTC m=+1.463676248 container died 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:40:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a26240a89b145561c9d012eba790153dce1eadca4422f35b33d7f9376974d6a-merged.mount: Deactivated successfully.
Nov 29 05:40:13 compute-0 podman[267302]: 2025-11-29 05:40:13.433153751 +0000 UTC m=+1.513460978 container remove 3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:40:13 compute-0 systemd[1]: libpod-conmon-3ae85c4f40ff48139a75c76b8ef77cf0236a1919e0edd9aa10712b02e08591b0.scope: Deactivated successfully.
Nov 29 05:40:13 compute-0 sudo[267198]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:40:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
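"config generate-minimal-conf" paired with "auth get" for client.admin is cephadm refreshing the ceph.conf and admin keyring it distributes to managed hosts. The minimal conf carries just enough for a client to find the monitors; for this cluster it would look roughly like the block below, where the fsid is taken from the log but the mon address is an assumption based on the mgr's 192.168.122.100 address:

    [global]
            fsid = 93f82912-647c-5e78-b081-707d0a2966d8
            mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]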
Nov 29 05:40:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:13 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 3d348d28-eb9e-427c-a8ec-6083bfe53d55 does not exist
Nov 29 05:40:13 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d5bc9f56-9610-499b-8eec-d6f6d8ec10e8 does not exist
Nov 29 05:40:13 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 60d4e170-cf75-4535-8d35-cdd582eefec5 does not exist
Nov 29 05:40:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:40:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:40:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:40:13 compute-0 sudo[269499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:13 compute-0 sudo[269499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:13 compute-0 sudo[269499]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:13 compute-0 sudo[269524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:40:13 compute-0 sudo[269524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:13 compute-0 sudo[269524]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:13 compute-0 sudo[269549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:13 compute-0 sudo[269549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:13 compute-0 sudo[269549]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:13 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:40:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:13 compute-0 sudo[269574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:40:13 compute-0 sudo[269574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
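This is the OSD creation step itself: cephadm, with CEPH_VOLUME_OSDSPEC_AFFINITY pinning the new OSDs to the "default_drive_group" spec, wraps "ceph-volume lvm batch" over three pre-created logical volumes. --no-auto skips ceph-volume's own device sorting because the LVs are passed explicitly, and --no-systemd leaves unit management to cephadm. Stripped of the container wrapper, the call reduces to roughly this sketch (inside the deployment it runs in the ceph container, not directly on the host):

    import subprocess

    lvs = ["/dev/ceph_vg0/ceph_lv0",
           "/dev/ceph_vg1/ceph_lv1",
           "/dev/ceph_vg2/ceph_lv2"]
    # Direct equivalent of the wrapped call above: consume the given LVs
    # as-is (--no-auto) and let cephadm manage the systemd units later.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto", *lvs,
         "--yes", "--no-systemd"],
        check=True,
    )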
Nov 29 05:40:13 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 05:40:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:13 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:40:13.755 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:40:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:40:13.756 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:40:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:40:13.756 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:40:13 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:40:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) v1
Nov 29 05:40:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) v1
Nov 29 05:40:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:40:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:40:14 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:40:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:40:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
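[annotation] The teardown mirror of that flow runs just above: "fs subvolume deauthorize" (which has the mon execute `auth rm client.<auth_id>`) followed by "fs subvolume evict" (which asks the MDS to drop any sessions still filtered by that auth_name and subvolume root). A sketch under the same assumptions as before; note the space-containing auth_id "alice bob" from the log passes through as a single argv element, exactly as it appears in the audited mon commands:

    # Sketch of the deauthorize/evict teardown logged above. Names are taken
    # from the log; error handling is minimal on purpose.
    import subprocess

    def subvolume_teardown(vol: str, sub: str, auth_id: str) -> None:
        # Drop the cephx entity first (mon runs `auth rm client.<auth_id>`)...
        subprocess.run(
            ["ceph", "fs", "subvolume", "deauthorize", vol, sub, auth_id],
            check=True)
        # ...then evict any live MDS sessions still using the old credentials.
        subprocess.run(
            ["ceph", "fs", "subvolume", "evict", vol, sub, auth_id],
            check=True)

    subvolume_teardown("cephfs", "848ba3c8-c30f-497b-9372-9c6fce9360b1",
                       "alice bob")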
Nov 29 05:40:14 compute-0 podman[269639]: 2025-11-29 05:40:14.06670888 +0000 UTC m=+0.042601307 container create eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:40:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 74 KiB/s wr, 8 op/s
Nov 29 05:40:14 compute-0 systemd[1]: Started libpod-conmon-eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236.scope.
Nov 29 05:40:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:40:14 compute-0 podman[269639]: 2025-11-29 05:40:14.142517698 +0000 UTC m=+0.118410145 container init eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:40:14 compute-0 podman[269639]: 2025-11-29 05:40:14.048772429 +0000 UTC m=+0.024664866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:40:14 compute-0 podman[269639]: 2025-11-29 05:40:14.149861825 +0000 UTC m=+0.125754252 container start eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 29 05:40:14 compute-0 podman[269639]: 2025-11-29 05:40:14.153343498 +0000 UTC m=+0.129235925 container attach eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:40:14 compute-0 hardcore_carver[269656]: 167 167
Nov 29 05:40:14 compute-0 systemd[1]: libpod-eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236.scope: Deactivated successfully.
Nov 29 05:40:14 compute-0 podman[269639]: 2025-11-29 05:40:14.154760662 +0000 UTC m=+0.130653089 container died eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:40:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b768e45ff28ae80d1be567b649db7070b848ef4de6d78152c73c9ac3c6a5004d-merged.mount: Deactivated successfully.
Nov 29 05:40:14 compute-0 podman[269639]: 2025-11-29 05:40:14.196046918 +0000 UTC m=+0.171939345 container remove eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_carver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:40:14 compute-0 systemd[1]: libpod-conmon-eb066974cac88224ecfda9c1fd3c8a7d4dc3ca0254a09971a6105e9341bb9236.scope: Deactivated successfully.
Nov 29 05:40:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:40:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1262537410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:40:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1262537410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
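[annotation] The two mon commands above are the periodic capacity poll from client.openstack: cluster-wide `df` plus a quota check on the "volumes" pool. A sketch of the same poll; the JSON field names ("total_bytes", "total_avail_bytes", "quota_max_bytes") match what current releases emit from `--format json`, but treat them as assumptions if your release differs:

    # Sketch of the capacity poll issued by client.openstack above.
    import json
    import subprocess

    def ceph_json(*args: str) -> dict:
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             check=True, capture_output=True, text=True)
        return json.loads(out.stdout)

    df = ceph_json("df")
    quota = ceph_json("osd", "pool", "get-quota", "volumes")
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"cluster: {avail}/{total} bytes free; "
          f"volumes quota: max_bytes={quota.get('quota_max_bytes')}")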
Nov 29 05:40:14 compute-0 podman[269678]: 2025-11-29 05:40:14.363596716 +0000 UTC m=+0.059145077 container create 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:40:14 compute-0 systemd[1]: Started libpod-conmon-2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7.scope.
Nov 29 05:40:14 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:40:14 compute-0 podman[269678]: 2025-11-29 05:40:14.335969061 +0000 UTC m=+0.031517502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:14 compute-0 podman[269678]: 2025-11-29 05:40:14.458683918 +0000 UTC m=+0.154232289 container init 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:40:14 compute-0 podman[269678]: 2025-11-29 05:40:14.473974836 +0000 UTC m=+0.169523187 container start 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.alice bob", "format": "json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.alice bob"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: pgmap v1085: 305 pgs: 305 active+clean; 60 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 74 KiB/s wr, 8 op/s
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1262537410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:40:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1262537410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:40:14 compute-0 podman[269678]: 2025-11-29 05:40:14.477230025 +0000 UTC m=+0.172778376 container attach 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:40:15 compute-0 objective_germain[269694]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:40:15 compute-0 objective_germain[269694]: --> relative data size: 1.0
Nov 29 05:40:15 compute-0 objective_germain[269694]: --> All data devices are unavailable
Nov 29 05:40:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:15 compute-0 systemd[1]: libpod-2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7.scope: Deactivated successfully.
Nov 29 05:40:15 compute-0 podman[269678]: 2025-11-29 05:40:15.527733184 +0000 UTC m=+1.223281535 container died 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:40:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-445529c81b940d6a5f43460b26348e2d792ba4f4940167bc5911aee66768c0bc-merged.mount: Deactivated successfully.
Nov 29 05:40:15 compute-0 podman[269678]: 2025-11-29 05:40:15.579567523 +0000 UTC m=+1.275115874 container remove 2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:40:15 compute-0 systemd[1]: libpod-conmon-2b4903a6a0baffc711ac8c68f616fa298aa54dcd438d437b2cea156d058d13a7.scope: Deactivated successfully.
Nov 29 05:40:15 compute-0 sudo[269574]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:15 compute-0 sudo[269735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:15 compute-0 sudo[269735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:15 compute-0 sudo[269735]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:15 compute-0 sudo[269760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:40:15 compute-0 sudo[269760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:15 compute-0 sudo[269760]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:15 compute-0 sudo[269785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:15 compute-0 sudo[269785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:15 compute-0 sudo[269785]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:15 compute-0 sudo[269810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:40:15 compute-0 sudo[269810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 131 KiB/s wr, 15 op/s
Nov 29 05:40:16 compute-0 podman[269879]: 2025-11-29 05:40:16.2125919 +0000 UTC m=+0.034347739 container create a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:40:16 compute-0 systemd[1]: Started libpod-conmon-a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66.scope.
Nov 29 05:40:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:40:16 compute-0 podman[269879]: 2025-11-29 05:40:16.29183168 +0000 UTC m=+0.113587549 container init a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:40:16 compute-0 podman[269879]: 2025-11-29 05:40:16.198256024 +0000 UTC m=+0.020011873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:40:16 compute-0 podman[269879]: 2025-11-29 05:40:16.302418595 +0000 UTC m=+0.124174434 container start a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 05:40:16 compute-0 gracious_vaughan[269895]: 167 167
Nov 29 05:40:16 compute-0 podman[269879]: 2025-11-29 05:40:16.306134735 +0000 UTC m=+0.127890574 container attach a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:40:16 compute-0 systemd[1]: libpod-a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66.scope: Deactivated successfully.
Nov 29 05:40:16 compute-0 podman[269879]: 2025-11-29 05:40:16.30759706 +0000 UTC m=+0.129352899 container died a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 29 05:40:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c0499c7bf4d6e392645418e2a70e92b06a3f9337d5a9533950f01744cd9591c-merged.mount: Deactivated successfully.
Nov 29 05:40:16 compute-0 podman[269879]: 2025-11-29 05:40:16.346893927 +0000 UTC m=+0.168649776 container remove a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 05:40:16 compute-0 systemd[1]: libpod-conmon-a11b26d2bf9935a4b5848b16daa5dbee36f4f127a2fc60233bda3e0f92b7dd66.scope: Deactivated successfully.
Nov 29 05:40:16 compute-0 podman[269918]: 2025-11-29 05:40:16.508982624 +0000 UTC m=+0.049026703 container create 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:40:16 compute-0 systemd[1]: Started libpod-conmon-92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1.scope.
Nov 29 05:40:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:40:16 compute-0 podman[269918]: 2025-11-29 05:40:16.482472935 +0000 UTC m=+0.022517004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5c2a032eef554ced966c7a5175ddb8011467f34048d2b52f7317dfe238392/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5c2a032eef554ced966c7a5175ddb8011467f34048d2b52f7317dfe238392/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5c2a032eef554ced966c7a5175ddb8011467f34048d2b52f7317dfe238392/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83f5c2a032eef554ced966c7a5175ddb8011467f34048d2b52f7317dfe238392/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:16 compute-0 podman[269918]: 2025-11-29 05:40:16.594134906 +0000 UTC m=+0.134178995 container init 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:40:16 compute-0 podman[269918]: 2025-11-29 05:40:16.604051765 +0000 UTC m=+0.144095854 container start 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:40:16 compute-0 podman[269918]: 2025-11-29 05:40:16.607510808 +0000 UTC m=+0.147554897 container attach 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:40:17 compute-0 ceph-mon[75176]: pgmap v1086: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 131 KiB/s wr, 15 op/s
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]: {
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:     "0": [
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:         {
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "devices": [
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "/dev/loop3"
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             ],
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_name": "ceph_lv0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_size": "21470642176",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "name": "ceph_lv0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "tags": {
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.cluster_name": "ceph",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.crush_device_class": "",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.encrypted": "0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.osd_id": "0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.type": "block",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.vdo": "0"
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             },
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "type": "block",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "vg_name": "ceph_vg0"
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:         }
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:     ],
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:     "1": [
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:         {
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "devices": [
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "/dev/loop4"
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             ],
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_name": "ceph_lv1",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_size": "21470642176",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "name": "ceph_lv1",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "tags": {
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.cluster_name": "ceph",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.crush_device_class": "",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.encrypted": "0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.osd_id": "1",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.type": "block",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.vdo": "0"
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             },
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "type": "block",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "vg_name": "ceph_vg1"
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:         }
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:     ],
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:     "2": [
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:         {
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "devices": [
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "/dev/loop5"
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             ],
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_name": "ceph_lv2",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_size": "21470642176",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "name": "ceph_lv2",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "tags": {
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.cluster_name": "ceph",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.crush_device_class": "",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.encrypted": "0",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.osd_id": "2",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.type": "block",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:                 "ceph.vdo": "0"
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             },
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "type": "block",
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:             "vg_name": "ceph_vg2"
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:         }
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]:     ]
Nov 29 05:40:17 compute-0 hardcore_perlman[269934]: }
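[annotation] The JSON emitted by hardcore_perlman above is the payload cephadm collects from `ceph-volume lvm list --format json`: a map of OSD id to its logical volumes, backing devices, and ceph.* LV tags. A sketch that summarizes such a payload; the sample below is trimmed to one OSD entry copied from the log, keeping only the fields the summary reads:

    # Sketch: summarize a `ceph-volume lvm list --format json` payload
    # (OSD id -> LV path -> backing device), as printed above.
    import json

    # One entry excerpted verbatim from the log output, trimmed to the
    # fields used below; the real payload carries all three OSDs.
    payload = '''
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
            "ceph.encrypted": "0"
          }
        }
      ]
    }
    '''

    def summarize(raw: str) -> None:
        for osd_id, lvs in sorted(json.loads(raw).items(),
                                  key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv["tags"]
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"on {','.join(lv['devices'])} "
                      f"(osd_fsid={tags['ceph.osd_fsid']}, "
                      f"encrypted={tags['ceph.encrypted']})")

    summarize(payload)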
Nov 29 05:40:17 compute-0 systemd[1]: libpod-92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1.scope: Deactivated successfully.
Nov 29 05:40:17 compute-0 podman[269918]: 2025-11-29 05:40:17.415898161 +0000 UTC m=+0.955942250 container died 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 05:40:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-83f5c2a032eef554ced966c7a5175ddb8011467f34048d2b52f7317dfe238392-merged.mount: Deactivated successfully.
Nov 29 05:40:17 compute-0 podman[269918]: 2025-11-29 05:40:17.4805579 +0000 UTC m=+1.020601939 container remove 92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:40:17 compute-0 systemd[1]: libpod-conmon-92f1f98c9e57d2cf5ee40a6109d3dbebe9ff1b013b7a501a22d8e7ea869a00b1.scope: Deactivated successfully.
Nov 29 05:40:17 compute-0 sudo[269810]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:40:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 29 05:40:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 05:40:17 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID bob with tenant 5dd620782ecb48b9af309e8bc536acb2
Nov 29 05:40:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:40:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:17 compute-0 sudo[269957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:17 compute-0 sudo[269957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:17 compute-0 sudo[269957]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:40:17 compute-0 sudo[269982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:40:17 compute-0 sudo[269982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:17 compute-0 sudo[269982]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:17 compute-0 sudo[270007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:17 compute-0 sudo[270007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:17 compute-0 sudo[270007]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:40:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 05:40:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:40:17 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:17 compute-0 sudo[270032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:40:17 compute-0 sudo[270032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6
Nov 29 05:40:17 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6],prefix=session evict} (starting...)
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:40:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 89 KiB/s wr, 10 op/s
Nov 29 05:40:18 compute-0 podman[270098]: 2025-11-29 05:40:18.014183611 +0000 UTC m=+0.033009697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:40:18 compute-0 podman[270098]: 2025-11-29 05:40:18.169964166 +0000 UTC m=+0.188790232 container create fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 05:40:18 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 05:40:18 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:18 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:18 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:18 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:40:18 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:40:18 compute-0 systemd[1]: Started libpod-conmon-fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8.scope.
Nov 29 05:40:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:40:18 compute-0 podman[270098]: 2025-11-29 05:40:18.263581862 +0000 UTC m=+0.282408008 container init fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:40:18 compute-0 podman[270098]: 2025-11-29 05:40:18.2751007 +0000 UTC m=+0.293926766 container start fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:40:18 compute-0 xenodochial_zhukovsky[270115]: 167 167
Nov 29 05:40:18 compute-0 systemd[1]: libpod-fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8.scope: Deactivated successfully.
Nov 29 05:40:18 compute-0 podman[270098]: 2025-11-29 05:40:18.278488211 +0000 UTC m=+0.297314297 container attach fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:40:18 compute-0 podman[270098]: 2025-11-29 05:40:18.279776922 +0000 UTC m=+0.298602988 container died fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:40:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3226dc7d43f2a3e8009327f3f142e76e28090a4e53c74da84ac7970dfe634f66-merged.mount: Deactivated successfully.
Nov 29 05:40:18 compute-0 podman[270098]: 2025-11-29 05:40:18.313613178 +0000 UTC m=+0.332439234 container remove fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:40:18 compute-0 systemd[1]: libpod-conmon-fb3d7039f1d91e49be6220018d13669bcb6cc94e544b8dd5a7bd843a071b2ed8.scope: Deactivated successfully.
Nov 29 05:40:18 compute-0 podman[270139]: 2025-11-29 05:40:18.453167561 +0000 UTC m=+0.035260291 container create 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:40:18 compute-0 systemd[1]: Started libpod-conmon-5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471.scope.
Nov 29 05:40:18 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3f9346a2d40d4260f89f281b829d8be7389aa26cfc7d064c7d72250931de8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:18 compute-0 podman[270139]: 2025-11-29 05:40:18.437997595 +0000 UTC m=+0.020090345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3f9346a2d40d4260f89f281b829d8be7389aa26cfc7d064c7d72250931de8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3f9346a2d40d4260f89f281b829d8be7389aa26cfc7d064c7d72250931de8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb3f9346a2d40d4260f89f281b829d8be7389aa26cfc7d064c7d72250931de8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:40:18 compute-0 podman[270139]: 2025-11-29 05:40:18.551330217 +0000 UTC m=+0.133422977 container init 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 05:40:18 compute-0 podman[270139]: 2025-11-29 05:40:18.556220565 +0000 UTC m=+0.138313305 container start 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 05:40:18 compute-0 podman[270139]: 2025-11-29 05:40:18.559031513 +0000 UTC m=+0.141124283 container attach 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:40:19 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 05:40:19 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:19 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:40:19.053 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 05:40:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:19 compute-0 ceph-mon[75176]: pgmap v1087: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 89 KiB/s wr, 10 op/s
Nov 29 05:40:19 compute-0 trusting_hopper[270156]: {
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "osd_id": 0,
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "type": "bluestore"
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:     },
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "osd_id": 1,
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "type": "bluestore"
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:     },
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "osd_id": 2,
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:         "type": "bluestore"
Nov 29 05:40:19 compute-0 trusting_hopper[270156]:     }
Nov 29 05:40:19 compute-0 trusting_hopper[270156]: }
Nov 29 05:40:19 compute-0 systemd[1]: libpod-5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471.scope: Deactivated successfully.
Nov 29 05:40:19 compute-0 conmon[270156]: conmon 5fb0c829e88695c8deb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471.scope/container/memory.events
Nov 29 05:40:19 compute-0 podman[270139]: 2025-11-29 05:40:19.530682362 +0000 UTC m=+1.112775102 container died 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:40:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb3f9346a2d40d4260f89f281b829d8be7389aa26cfc7d064c7d72250931de8f-merged.mount: Deactivated successfully.
Nov 29 05:40:19 compute-0 podman[270139]: 2025-11-29 05:40:19.592244115 +0000 UTC m=+1.174336855 container remove 5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 29 05:40:19 compute-0 systemd[1]: libpod-conmon-5fb0c829e88695c8deb1155523bd3c45b9459c1da80288042ccf40eb29652471.scope: Deactivated successfully.
Nov 29 05:40:19 compute-0 sudo[270032]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:40:19 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:40:19 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:19 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 2b960a0e-17cf-4744-b61d-22f67967eca6 does not exist
Nov 29 05:40:19 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev e47d686d-b5a6-4b50-993c-8415dbeb91db does not exist
Nov 29 05:40:19 compute-0 sudo[270204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:40:19 compute-0 sudo[270204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:19 compute-0 sudo[270204]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:19 compute-0 sudo[270229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:40:19 compute-0 sudo[270229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:40:19 compute-0 sudo[270229]: pam_unix(sudo:session): session closed for user root
Nov 29 05:40:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 120 KiB/s wr, 14 op/s
Nov 29 05:40:20 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 05:40:20 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:20 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:40:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.521937) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820521982, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1205, "num_deletes": 257, "total_data_size": 1289445, "memory_usage": 1320472, "flush_reason": "Manual Compaction"}
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820534015, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1252031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23524, "largest_seqno": 24728, "table_properties": {"data_size": 1246316, "index_size": 2855, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14382, "raw_average_key_size": 20, "raw_value_size": 1233799, "raw_average_value_size": 1745, "num_data_blocks": 127, "num_entries": 707, "num_filter_entries": 707, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394768, "oldest_key_time": 1764394768, "file_creation_time": 1764394820, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 12124 microseconds, and 4325 cpu microseconds.
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.534062) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1252031 bytes OK
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.534080) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.536010) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.536027) EVENT_LOG_v1 {"time_micros": 1764394820536021, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.536046) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1283333, prev total WAL file size 1283333, number of live WAL files 2.
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.537081) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373532' seq:0, type:0; will stop at (end)
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1222KB)], [53(8578KB)]
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820537154, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10035997, "oldest_snapshot_seqno": -1}
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5327 keys, 9940487 bytes, temperature: kUnknown
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820627997, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9940487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9901125, "index_size": 24916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 132423, "raw_average_key_size": 24, "raw_value_size": 9801742, "raw_average_value_size": 1840, "num_data_blocks": 1040, "num_entries": 5327, "num_filter_entries": 5327, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394820, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.628207) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9940487 bytes
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.695079) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.4 rd, 109.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.4 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(16.0) write-amplify(7.9) OK, records in: 5864, records dropped: 537 output_compression: NoCompression
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.695117) EVENT_LOG_v1 {"time_micros": 1764394820695099, "job": 28, "event": "compaction_finished", "compaction_time_micros": 90901, "compaction_time_cpu_micros": 41767, "output_level": 6, "num_output_files": 1, "total_output_size": 9940487, "num_input_records": 5864, "num_output_records": 5327, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820695767, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394820698776, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.536950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.698881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.698888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.698891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.698894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:40:20 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:40:20.698897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:40:21 compute-0 ceph-mon[75176]: pgmap v1088: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 120 KiB/s wr, 14 op/s
Nov 29 05:40:21 compute-0 sshd-session[270254]: Invalid user cc from 152.32.145.111 port 37036
Nov 29 05:40:21 compute-0 sshd-session[270254]: Received disconnect from 152.32.145.111 port 37036:11: Bye Bye [preauth]
Nov 29 05:40:21 compute-0 sshd-session[270254]: Disconnected from invalid user cc 152.32.145.111 port 37036 [preauth]
Nov 29 05:40:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 10 op/s
Nov 29 05:40:22 compute-0 ceph-mon[75176]: pgmap v1089: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 10 op/s
Nov 29 05:40:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:22 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 05:40:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:40:23 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/.meta.tmp'
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/.meta.tmp' to config b'/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/.meta'
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:40:23 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:40:23 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-887052356 with tenant a05f740db7b94303aac90d6f217f853a
Nov 29 05:40:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:40:23 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume authorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, tenant_id:a05f740db7b94303aac90d6f217f853a, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/70cb9e84-4e7b-4e83-b5ff-872d8a0e3944'' moved to trashcan
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:40:23 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:70cb9e84-4e7b-4e83-b5ff-872d8a0e3944, vol_name:cephfs) < ""
Nov 29 05:40:23 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-887052356", "caps": ["mds", "allow rw path=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a98b9fa5-d939-4fac-9215-346a94abca4f", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 10 op/s
Nov 29 05:40:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"} v 0) v1
Nov 29 05:40:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"} v 0) v1
Nov 29 05:40:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:40:24 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:40:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume deauthorize, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-887052356, client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6
Nov 29 05:40:24 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-887052356,client_metadata.root=/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f/1ddd10c5-e064-4e1d-82bc-8b2f4ca83ca6],prefix=session evict} (starting...)
Nov 29 05:40:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:40:24 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-887052356, format:json, prefix:fs subvolume evict, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "tenant_id": "a05f740db7b94303aac90d6f217f853a", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "format": "json"}]: dispatch
Nov 29 05:40:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "70cb9e84-4e7b-4e83-b5ff-872d8a0e3944", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:24 compute-0 ceph-mon[75176]: pgmap v1090: 305 pgs: 305 active+clean; 61 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 10 op/s
Nov 29 05:40:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]: dispatch
Nov 29 05:40:24 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-887052356"}]': finished
Nov 29 05:40:25 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "auth_id": "bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:40:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 29 05:40:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 05:40:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f,allow rw path=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e71bf388-0320-44c7-80f4-31f36b232ca1"]} v 0) v1
Nov 29 05:40:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f,allow rw path=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e71bf388-0320-44c7-80f4-31f36b232ca1"]}]: dispatch
Nov 29 05:40:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f,allow rw path=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e71bf388-0320-44c7-80f4-31f36b232ca1"]}]': finished
Nov 29 05:40:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 29 05:40:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 05:40:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, tenant_id:5dd620782ecb48b9af309e8bc536acb2, vol_name:cephfs) < ""
Nov 29 05:40:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:25 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:25 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "auth_id": "tempest-cephx-id-887052356", "format": "json"}]: dispatch
Nov 29 05:40:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 05:40:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f,allow rw path=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e71bf388-0320-44c7-80f4-31f36b232ca1"]}]: dispatch
Nov 29 05:40:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f,allow rw path=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1,allow rw pool=cephfs.cephfs.data namespace=fsvolumens_e71bf388-0320-44c7-80f4-31f36b232ca1"]}]': finished
Nov 29 05:40:25 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 05:40:26 compute-0 podman[270257]: 2025-11-29 05:40:26.06785121 +0000 UTC m=+0.104783417 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:40:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 61 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 122 KiB/s wr, 13 op/s
Nov 29 05:40:26 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "auth_id": "bob", "tenant_id": "5dd620782ecb48b9af309e8bc536acb2", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:26 compute-0 ceph-mon[75176]: pgmap v1091: 305 pgs: 305 active+clean; 61 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 122 KiB/s wr, 13 op/s
Nov 29 05:40:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db_9d58da62-529e-4378-9a77-682165217cf5", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db_9d58da62-529e-4378-9a77-682165217cf5, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:40:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp'
Nov 29 05:40:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp' to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta'
Nov 29 05:40:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db_9d58da62-529e-4378-9a77-682165217cf5, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:40:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:40:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp'
Nov 29 05:40:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta.tmp' to config b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d/.meta'
Nov 29 05:40:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:54db2b9e-cb54-440e-8afd-6c23560987db, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:40:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 61 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 65 KiB/s wr, 6 op/s
Nov 29 05:40:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp'
Nov 29 05:40:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp' to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta'
Nov 29 05:40:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "format": "json"}]: dispatch
Nov 29 05:40:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:40:28 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 05:40:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 29 05:40:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1"]} v 0) v1
Nov 29 05:40:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1"]}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1"]}]': finished
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49
Nov 29 05:40:29 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/e71bf388-0320-44c7-80f4-31f36b232ca1/7339c488-5326-4f67-ad2f-b921ebea9d49],prefix=session evict} (starting...)
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:e71bf388-0320-44c7-80f4-31f36b232ca1, vol_name:cephfs) < ""
Nov 29 05:40:29 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db_9d58da62-529e-4378-9a77-682165217cf5", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "snap_name": "54db2b9e-cb54-440e-8afd-6c23560987db", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mon[75176]: pgmap v1092: 305 pgs: 305 active+clean; 61 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 65 KiB/s wr, 6 op/s
Nov 29 05:40:29 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1"]}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_848ba3c8-c30f-497b-9372-9c6fce9360b1"]}]': finished
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "format": "json"}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a98b9fa5-d939-4fac-9215-346a94abca4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a98b9fa5-d939-4fac-9215-346a94abca4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:29 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:29.244+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a98b9fa5-d939-4fac-9215-346a94abca4f' of type subvolume
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a98b9fa5-d939-4fac-9215-346a94abca4f' of type subvolume
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a98b9fa5-d939-4fac-9215-346a94abca4f'' moved to trashcan
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:40:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a98b9fa5-d939-4fac-9215-346a94abca4f, vol_name:cephfs) < ""
Nov 29 05:40:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 129 KiB/s wr, 14 op/s
Nov 29 05:40:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "format": "json"}]: dispatch
Nov 29 05:40:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 05:40:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e71bf388-0320-44c7-80f4-31f36b232ca1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 05:40:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "format": "json"}]: dispatch
Nov 29 05:40:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:30 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:30.764+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '779d5f7d-4b59-47d7-ae31-6662b5ea257d' of type subvolume
Nov 29 05:40:30 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '779d5f7d-4b59-47d7-ae31-6662b5ea257d' of type subvolume
Nov 29 05:40:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:40:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/779d5f7d-4b59-47d7-ae31-6662b5ea257d'' moved to trashcan
Nov 29 05:40:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:40:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:779d5f7d-4b59-47d7-ae31-6662b5ea257d, vol_name:cephfs) < ""
Nov 29 05:40:30 compute-0 nova_compute[254898]: 2025-11-29 05:40:30.966 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:40:31 compute-0 podman[270279]: 2025-11-29 05:40:31.093040317 +0000 UTC m=+0.140435876 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 05:40:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "format": "json"}]: dispatch
Nov 29 05:40:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a98b9fa5-d939-4fac-9215-346a94abca4f", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:31 compute-0 ceph-mon[75176]: pgmap v1093: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 129 KiB/s wr, 14 op/s
Nov 29 05:40:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "snap_name": "da8025e3-cf4f-466c-b32d-deea84c459c8", "format": "json"}]: dispatch
Nov 29 05:40:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 97 KiB/s wr, 10 op/s
Nov 29 05:40:32 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "format": "json"}]: dispatch
Nov 29 05:40:32 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "779d5f7d-4b59-47d7-ae31-6662b5ea257d", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 05:40:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0) v1
Nov 29 05:40:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 05:40:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0) v1
Nov 29 05:40:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.bob"}]: dispatch
Nov 29 05:40:32 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Nov 29 05:40:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 05:40:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f
Nov 29 05:40:32 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1/b9af2a18-fc04-4d96-ba1b-197da2f0632f],prefix=session evict} (starting...)
Nov 29 05:40:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:40:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:33 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "snap_name": "da8025e3-cf4f-466c-b32d-deea84c459c8", "format": "json"}]: dispatch
Nov 29 05:40:33 compute-0 ceph-mon[75176]: pgmap v1094: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 97 KiB/s wr, 10 op/s
Nov 29 05:40:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bob", "format": "json"}]: dispatch
Nov 29 05:40:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.bob"}]: dispatch
Nov 29 05:40:33 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Nov 29 05:40:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 29 05:40:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 29 05:40:33 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 29 05:40:33 compute-0 nova_compute[254898]: 2025-11-29 05:40:33.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:40:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 117 KiB/s wr, 12 op/s
Nov 29 05:40:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 05:40:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "auth_id": "bob", "format": "json"}]: dispatch
Nov 29 05:40:34 compute-0 ceph-mon[75176]: osdmap e149: 3 total, 3 up, 3 in
Nov 29 05:40:34 compute-0 nova_compute[254898]: 2025-11-29 05:40:34.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:40:34 compute-0 nova_compute[254898]: 2025-11-29 05:40:34.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:40:34 compute-0 nova_compute[254898]: 2025-11-29 05:40:34.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:40:34 compute-0 nova_compute[254898]: 2025-11-29 05:40:34.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:40:35 compute-0 ceph-mon[75176]: pgmap v1096: 305 pgs: 305 active+clean; 62 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 117 KiB/s wr, 12 op/s
Nov 29 05:40:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:35 compute-0 nova_compute[254898]: 2025-11-29 05:40:35.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:40:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 124 KiB/s wr, 14 op/s
Nov 29 05:40:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "format": "json"}]: dispatch
Nov 29 05:40:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:36 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:36.335+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '848ba3c8-c30f-497b-9372-9c6fce9360b1' of type subvolume
Nov 29 05:40:36 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '848ba3c8-c30f-497b-9372-9c6fce9360b1' of type subvolume
Nov 29 05:40:36 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/848ba3c8-c30f-497b-9372-9c6fce9360b1'' moved to trashcan
Nov 29 05:40:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:40:36 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:848ba3c8-c30f-497b-9372-9c6fce9360b1, vol_name:cephfs) < ""
Nov 29 05:40:36 compute-0 nova_compute[254898]: 2025-11-29 05:40:36.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:40:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "snap_name": "da8025e3-cf4f-466c-b32d-deea84c459c8_88f00d98-7609-4a65-a545-d92208fb556e", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8_88f00d98-7609-4a65-a545-d92208fb556e, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp'
Nov 29 05:40:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp' to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta'
Nov 29 05:40:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8_88f00d98-7609-4a65-a545-d92208fb556e, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "snap_name": "da8025e3-cf4f-466c-b32d-deea84c459c8", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp'
Nov 29 05:40:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta.tmp' to config b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f/.meta'
Nov 29 05:40:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da8025e3-cf4f-466c-b32d-deea84c459c8, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:37 compute-0 ceph-mon[75176]: pgmap v1097: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 124 KiB/s wr, 14 op/s
Nov 29 05:40:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 124 KiB/s wr, 14 op/s
Nov 29 05:40:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "format": "json"}]: dispatch
Nov 29 05:40:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "848ba3c8-c30f-497b-9372-9c6fce9360b1", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "snap_name": "da8025e3-cf4f-466c-b32d-deea84c459c8_88f00d98-7609-4a65-a545-d92208fb556e", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "snap_name": "da8025e3-cf4f-466c-b32d-deea84c459c8", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:38 compute-0 ceph-mon[75176]: pgmap v1098: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 124 KiB/s wr, 14 op/s
Nov 29 05:40:38 compute-0 nova_compute[254898]: 2025-11-29 05:40:38.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:40:38 compute-0 nova_compute[254898]: 2025-11-29 05:40:38.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:40:38 compute-0 nova_compute[254898]: 2025-11-29 05:40:38.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:40:38 compute-0 nova_compute[254898]: 2025-11-29 05:40:38.973 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:40:38 compute-0 nova_compute[254898]: 2025-11-29 05:40:38.973 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.004 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.005 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.005 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.005 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.006 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:40:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:40:39 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2558097515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.404 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:40:39 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2558097515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.583 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.584 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5104MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.584 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.584 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.665 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.665 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:40:39 compute-0 nova_compute[254898]: 2025-11-29 05:40:39.688 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:40:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:40:40 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3391827870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:40:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 70 KiB/s wr, 8 op/s
Nov 29 05:40:40 compute-0 nova_compute[254898]: 2025-11-29 05:40:40.104 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:40:40 compute-0 nova_compute[254898]: 2025-11-29 05:40:40.109 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:40:40 compute-0 nova_compute[254898]: 2025-11-29 05:40:40.129 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:40:40 compute-0 nova_compute[254898]: 2025-11-29 05:40:40.130 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:40:40 compute-0 nova_compute[254898]: 2025-11-29 05:40:40.130 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:40:40 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3391827870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:40:40 compute-0 ceph-mon[75176]: pgmap v1099: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 70 KiB/s wr, 8 op/s
Nov 29 05:40:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 29 05:40:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 29 05:40:40 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 29 05:40:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "format": "json"}]: dispatch
Nov 29 05:40:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:40:40 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:40.940+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '163fafb9-e2a0-4bac-af62-6ce4faca289f' of type subvolume
Nov 29 05:40:40 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '163fafb9-e2a0-4bac-af62-6ce4faca289f' of type subvolume
Nov 29 05:40:40 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/163fafb9-e2a0-4bac-af62-6ce4faca289f'' moved to trashcan
Nov 29 05:40:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:40:40 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:163fafb9-e2a0-4bac-af62-6ce4faca289f, vol_name:cephfs) < ""
Nov 29 05:40:41 compute-0 podman[270350]: 2025-11-29 05:40:41.006321323 +0000 UTC m=+0.050011136 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:40:41 compute-0 nova_compute[254898]: 2025-11-29 05:40:41.127 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/.meta.tmp'
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/.meta.tmp' to config b'/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/.meta'
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "format": "json"}]: dispatch
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:40:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:40:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:40:41
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['backups', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.data']
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f98b8ee0>)]
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 05:40:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 29 05:40:41 compute-0 ceph-mon[75176]: osdmap e150: 3 total, 3 up, 3 in
Nov 29 05:40:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "format": "json"}]: dispatch
Nov 29 05:40:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "163fafb9-e2a0-4bac-af62-6ce4faca289f", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "format": "json"}]: dispatch
Nov 29 05:40:41 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 29 05:40:41 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:40:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
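The rbd_support handlers above reload their per-pool schedules (vms, volumes, backups, images) for mirror snapshots and trash purging after the osdmap change. A sketch for listing both schedule stores, assuming rbd CLI access; the flags are the standard ones, not from this log:

    import subprocess

    # Recursively list schedules at every level (global/pool/namespace/image).
    for args in (["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
                 ["rbd", "trash", "purge", "schedule", "ls", "--recursive"]):
        subprocess.run(args, check=True)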
Nov 29 05:40:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 88 KiB/s wr, 10 op/s
Nov 29 05:40:42 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.csskcz(active, since 32m)
Nov 29 05:40:42 compute-0 ceph-mon[75176]: osdmap e151: 3 total, 3 up, 3 in
Nov 29 05:40:42 compute-0 ceph-mon[75176]: pgmap v1102: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 88 KiB/s wr, 10 op/s
Nov 29 05:40:43 compute-0 ceph-mon[75176]: mgrmap e15: compute-0.csskcz(active, since 32m)
Nov 29 05:40:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:40:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:40:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "format": "json"}]: dispatch
Nov 29 05:40:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:40:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 27 KiB/s wr, 3 op/s
Nov 29 05:40:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "format": "json"}]: dispatch
Nov 29 05:40:44 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:44 compute-0 ceph-mon[75176]: pgmap v1103: 305 pgs: 305 active+clean; 63 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 27 KiB/s wr, 3 op/s
Nov 29 05:40:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 05:40:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/.meta.tmp'
Nov 29 05:40:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/.meta.tmp' to config b'/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/.meta'
Nov 29 05:40:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 05:40:44 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "format": "json"}]: dispatch
Nov 29 05:40:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 05:40:44 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 05:40:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:40:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "auth_id": "Joe", "tenant_id": "4e135fffa1e64bf8b2e43bd33b51cf15", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 05:40:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 29 05:40:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 05:40:45 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID Joe with tenant 4e135fffa1e64bf8b2e43bd33b51cf15
Nov 29 05:40:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:40:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:45 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 05:40:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:45 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "format": "json"}]: dispatch
Nov 29 05:40:45 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 05:40:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22", "mon", "allow r"], "format": "json"}]': finished
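The authorize call above decomposes into exactly the two mon commands logged: 'auth get' to look up any existing client.Joe, then 'auth get-or-create' with caps scoped to the subvolume: rw on its path for the MDS, rw on the data pool restricted to the subvolume's RADOS namespace for the OSDs, and read-only mon access. The standalone equivalent, with the caps copied verbatim from the log:

    import subprocess

    caps = [
        "mds", "allow rw path=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5",
        "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_a0e01f60-977a-4212-be2c-851b3318eb22",
        "mon", "allow r",
    ]
    # Prints (or creates and prints) the keyring entry for client.Joe.
    subprocess.run(["ceph", "auth", "get-or-create", "client.Joe", *caps],
                   check=True)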
Nov 29 05:40:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 84 KiB/s wr, 97 op/s
Nov 29 05:40:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "auth_id": "Joe", "tenant_id": "4e135fffa1e64bf8b2e43bd33b51cf15", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:46 compute-0 ceph-mon[75176]: pgmap v1104: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 84 KiB/s wr, 97 op/s
Nov 29 05:40:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "22583c21-c0dc-4991-a17b-a735e6d7c9f4", "format": "json"}]: dispatch
Nov 29 05:40:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:47 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
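Snapshot creation goes through the same mgr dispatch path as the create/getpath calls; the CLI equivalent of the call just logged, as a sketch:

    import subprocess

    subprocess.run(
        ["ceph", "fs", "subvolume", "snapshot", "create", "cephfs",
         "07a65cd4-2777-43ad-b684-b3508a87dd10",
         "22583c21-c0dc-4991-a17b-a735e6d7c9f4"],
        check=True,
    )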
Nov 29 05:40:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 56 KiB/s wr, 93 op/s
Nov 29 05:40:48 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "22583c21-c0dc-4991-a17b-a735e6d7c9f4", "format": "json"}]: dispatch
Nov 29 05:40:48 compute-0 ceph-mon[75176]: pgmap v1105: 305 pgs: 305 active+clean; 63 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 56 KiB/s wr, 93 op/s
Nov 29 05:40:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:40:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/.meta.tmp'
Nov 29 05:40:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/.meta.tmp' to config b'/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/.meta'
Nov 29 05:40:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:40:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "format": "json"}]: dispatch
Nov 29 05:40:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:40:49 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:40:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:40:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:49 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 87 KiB/s wr, 81 op/s
Nov 29 05:40:50 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:50 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "format": "json"}]: dispatch
Nov 29 05:40:50 compute-0 ceph-mon[75176]: pgmap v1106: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 87 KiB/s wr, 81 op/s
Nov 29 05:40:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 29 05:40:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 29 05:40:50 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 29 05:40:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "da42dd29-b8ca-4a89-a34f-b140d81e7bf9", "format": "json"}]: dispatch
Nov 29 05:40:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.00036214908103796316 of space, bias 4.0, pg target 0.43457889724555576 quantized to 16 (current 16)
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:40:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
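The pg_autoscaler targets above are consistent with target = capacity_ratio * bias * 300, where the factor 300 plausibly comes from this cluster's 3 OSDs times the default mon_target_pg_per_osd of 100 (an inference, not stated in the log); the raw target is then quantized to a power of two, here always the current pg_num since no change was warranted. A check against three of the logged lines:

    # Verify: pg target = ratio * bias * 300 for the values logged above.
    pools = [
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("images",             0.000665858301588852,   1.0, 0.19975749047665559),
        ("cephfs.cephfs.meta", 0.00036214908103796316, 4.0, 0.43457889724555576),
    ]
    for name, ratio, bias, logged in pools:
        assert abs(ratio * bias * 300 - logged) < 1e-9, name
    print("all logged pg targets match ratio * bias * 300")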
Nov 29 05:40:51 compute-0 ceph-mon[75176]: osdmap e152: 3 total, 3 up, 3 in
Nov 29 05:40:51 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "da42dd29-b8ca-4a89-a34f-b140d81e7bf9", "format": "json"}]: dispatch
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 83 KiB/s wr, 77 op/s
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp'
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp' to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta'
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "format": "json"}]: dispatch
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:40:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:40:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:52 compute-0 ceph-mon[75176]: pgmap v1108: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 83 KiB/s wr, 77 op/s
Nov 29 05:40:52 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "Joe", "tenant_id": "e97b8963e55a4094b1cb702d19d887ba", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 05:40:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 29 05:40:52 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 05:40:52 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:40:52.792+0000 7fa4c75e5640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Nov 29 05:40:52 compute-0 ceph-mgr[75473]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
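The EPERM above is the volumes module's cross-tenant guard: 'Joe' was created at 05:40:45 for tenant 4e135fffa1e64bf8b2e43bd33b51cf15, so a second authorize for the same auth_id under tenant e97b8963e55a4094b1cb702d19d887ba is refused. A hypothetical sketch of that check, inferred from the "Creating meta for ID ... with tenant ..." lines rather than from the Ceph source:

    # Hypothetical guard: an auth_id stays bound to the tenant that created it.
    def check_auth_id(auth_meta: dict, auth_id: str, tenant_id: str) -> None:
        existing = auth_meta.get(auth_id)      # e.g. {"tenant_id": "4e13..."}
        if existing and existing["tenant_id"] != tenant_id:
            raise PermissionError(f"auth ID: {auth_id} is already in use")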
Nov 29 05:40:53 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:40:53 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "format": "json"}]: dispatch
Nov 29 05:40:53 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "Joe", "tenant_id": "e97b8963e55a4094b1cb702d19d887ba", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:53 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 05:40:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 83 KiB/s wr, 77 op/s
Nov 29 05:40:54 compute-0 ceph-mon[75176]: pgmap v1109: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 83 KiB/s wr, 77 op/s
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "da42dd29-b8ca-4a89-a34f-b140d81e7bf9_b15f858e-3e57-4b58-8d27-5b096ba3f743", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9_b15f858e-3e57-4b58-8d27-5b096ba3f743, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9_b15f858e-3e57-4b58-8d27-5b096ba3f743, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "da42dd29-b8ca-4a89-a34f-b140d81e7bf9", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da42dd29-b8ca-4a89-a34f-b140d81e7bf9, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "snap_name": "4bc7ae62-8f19-489c-ab78-f250246cad8c", "format": "json"}]: dispatch
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:40:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:40:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s wr, 4 op/s
Nov 29 05:40:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "tempest-cephx-id-2011883581", "tenant_id": "e97b8963e55a4094b1cb702d19d887ba", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume authorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 05:40:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"} v 0) v1
Nov 29 05:40:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 05:40:56 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID tempest-cephx-id-2011883581 with tenant e97b8963e55a4094b1cb702d19d887ba
Nov 29 05:40:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2011883581", "caps": ["mds", "allow rw path=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fb7c7b44-2af1-44fc-8694-006120ff8320", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:40:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2011883581", "caps": ["mds", "allow rw path=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fb7c7b44-2af1-44fc-8694-006120ff8320", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2011883581", "caps": ["mds", "allow rw path=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fb7c7b44-2af1-44fc-8694-006120ff8320", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume authorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 05:40:56 compute-0 podman[270369]: 2025-11-29 05:40:56.993237515 +0000 UTC m=+0.047399124 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 05:40:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "da42dd29-b8ca-4a89-a34f-b140d81e7bf9_b15f858e-3e57-4b58-8d27-5b096ba3f743", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "da42dd29-b8ca-4a89-a34f-b140d81e7bf9", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:57 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "snap_name": "4bc7ae62-8f19-489c-ab78-f250246cad8c", "format": "json"}]: dispatch
Nov 29 05:40:57 compute-0 ceph-mon[75176]: pgmap v1110: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s wr, 4 op/s
Nov 29 05:40:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 05:40:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2011883581", "caps": ["mds", "allow rw path=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fb7c7b44-2af1-44fc-8694-006120ff8320", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:40:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2011883581", "caps": ["mds", "allow rw path=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_fb7c7b44-2af1-44fc-8694-006120ff8320", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:40:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s wr, 4 op/s
Nov 29 05:40:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 29 05:40:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 29 05:40:58 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "tempest-cephx-id-2011883581", "tenant_id": "e97b8963e55a4094b1cb702d19d887ba", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:40:58 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 29 05:40:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "e835c72c-a635-4c4e-baef-8e8d67cd9fec", "format": "json"}]: dispatch
Nov 29 05:40:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:58 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:40:59 compute-0 ceph-mon[75176]: pgmap v1111: 305 pgs: 305 active+clean; 64 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s wr, 4 op/s
Nov 29 05:40:59 compute-0 ceph-mon[75176]: osdmap e153: 3 total, 3 up, 3 in
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "snap_name": "4bc7ae62-8f19-489c-ab78-f250246cad8c_5fc56f62-3563-4637-98c4-f1c64fe4cf32", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c_5fc56f62-3563-4637-98c4-f1c64fe4cf32, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp'
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp' to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta'
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c_5fc56f62-3563-4637-98c4-f1c64fe4cf32, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "snap_name": "4bc7ae62-8f19-489c-ab78-f250246cad8c", "force": true, "format": "json"}]: dispatch
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp'
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta.tmp' to config b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d/.meta'
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4bc7ae62-8f19-489c-ab78-f250246cad8c, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'fb7c7b44-2af1-44fc-8694-006120ff8320'
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84
Nov 29 05:40:59 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84],prefix=session evict} (starting...)
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:40:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
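Deauthorize only removes the cephx identity (and here merely logs a warning, since 'Joe' was never granted access to this subvolume); evict additionally asks the MDS to drop any live sessions matching the auth name and mount root, which is the asok_command seen in the ceph-mds line above. The same eviction can be issued directly against the MDS named in the log; a sketch using the logged filters:

    import subprocess

    subprocess.run(
        ["ceph", "tell", "mds.cephfs.compute-0.mjtuko", "session", "evict",
         "auth_name=Joe",
         "client_metadata.root=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84"],
        check=True,
    )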
Nov 29 05:41:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 216 B/s rd, 73 KiB/s wr, 5 op/s
Nov 29 05:41:00 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "e835c72c-a635-4c4e-baef-8e8d67cd9fec", "format": "json"}]: dispatch
Nov 29 05:41:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 29 05:41:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 29 05:41:00 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 29 05:41:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "snap_name": "4bc7ae62-8f19-489c-ab78-f250246cad8c_5fc56f62-3563-4637-98c4-f1c64fe4cf32", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "snap_name": "4bc7ae62-8f19-489c-ab78-f250246cad8c", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 05:41:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 05:41:01 compute-0 ceph-mon[75176]: pgmap v1113: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 216 B/s rd, 73 KiB/s wr, 5 op/s
Nov 29 05:41:01 compute-0 ceph-mon[75176]: osdmap e154: 3 total, 3 up, 3 in
Nov 29 05:41:02 compute-0 podman[270390]: 2025-11-29 05:41:02.073788724 +0000 UTC m=+0.128810425 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:41:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 6 op/s
Nov 29 05:41:02 compute-0 ceph-mon[75176]: pgmap v1115: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 86 KiB/s wr, 6 op/s
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "format": "json"}]: dispatch
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:03 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:03.322+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0ff274bb-e3ac-4d57-8489-1cecf428692d' of type subvolume
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0ff274bb-e3ac-4d57-8489-1cecf428692d' of type subvolume
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0ff274bb-e3ac-4d57-8489-1cecf428692d'' moved to trashcan
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0ff274bb-e3ac-4d57-8489-1cecf428692d, vol_name:cephfs) < ""
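Two behaviors show up in the block above: 'fs clone status' returns EOPNOTSUPP (95) because the target is a plain subvolume rather than a clone, and 'fs subvolume rm' is asynchronous, renaming the path into the trashcan and queuing a purge job for the volume. A sketch that tolerates the first case:

    import subprocess

    # clone status is only valid for clone-type subvolumes; a plain
    # subvolume fails with EOPNOTSUPP (95), as logged above.
    r = subprocess.run(
        ["ceph", "fs", "clone", "status", "cephfs",
         "0ff274bb-e3ac-4d57-8489-1cecf428692d"],
        capture_output=True, text=True,
    )
    if r.returncode != 0:
        print("not a clone:", r.stderr.strip())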
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume deauthorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:41:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"} v 0) v1
Nov 29 05:41:03 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 05:41:03 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2011883581"} v 0) v1
Nov 29 05:41:03 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2011883581"}]: dispatch
Nov 29 05:41:03 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2011883581"}]': finished
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume deauthorize, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume evict, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2011883581, client_metadata.root=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84
Nov 29 05:41:03 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=tempest-cephx-id-2011883581,client_metadata.root=/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320/4c1cf698-c200-41d2-ac17-97d695ba9a84],prefix=session evict} (starting...)
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2011883581, format:json, prefix:fs subvolume evict, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:41:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 05:41:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2011883581"}]: dispatch
Nov 29 05:41:03 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2011883581"}]': finished
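
The lines above show the two-step revocation the volumes plugin performs for tempest-cephx-id-2011883581: 'fs subvolume deauthorize' drops the cephx identity (the mgr issues 'auth get' and 'auth rm' to the mon on the caller's behalf), and 'fs subvolume evict' asks the MDS to terminate any live sessions matching that auth_name and the subvolume's client_metadata.root. A client-side sketch of the same pair, using the ceph CLI and the names from the log:

    import subprocess

    def revoke_access(volume, subvolume, auth_id):
        # Step 1: remove the cephx key; future mounts with this ID will fail.
        subprocess.run(["ceph", "fs", "subvolume", "deauthorize",
                        volume, subvolume, auth_id], check=True)
        # Step 2: evict sessions still mounted with that ID, so revocation
        # takes effect immediately rather than at the next reconnect.
        subprocess.run(["ceph", "fs", "subvolume", "evict",
                        volume, subvolume, auth_id], check=True)

    revoke_access("cephfs", "fb7c7b44-2af1-44fc-8694-006120ff8320",
                  "tempest-cephx-id-2011883581")
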
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "e835c72c-a635-4c4e-baef-8e8d67cd9fec_49b59679-70ca-40e8-b8a9-078b9d51d09b", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec_49b59679-70ca-40e8-b8a9-078b9d51d09b, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec_49b59679-70ca-40e8-b8a9-078b9d51d09b, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "e835c72c-a635-4c4e-baef-8e8d67cd9fec", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:41:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:e835c72c-a635-4c4e-baef-8e8d67cd9fec, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
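
Note the pattern in the two dispatches above: the caller removes both the suffixed snapshot name (e835c72c..._49b59679...) and the bare name (e835c72c...), each with force:true, so whichever of the candidate names actually exists gets cleaned up and the missing one does not raise an error. A sketch of that idiom with the ceph CLI (the helper name is illustrative):

    import subprocess

    def purge_snapshot_names(volume, subvolume, names):
        # With --force the mgr treats a missing snapshot as success,
        # which is what lets both rm calls above finish cleanly.
        for snap in names:
            subprocess.run(["ceph", "fs", "subvolume", "snapshot", "rm",
                            volume, subvolume, snap, "--force"], check=True)

    purge_snapshot_names(
        "cephfs", "07a65cd4-2777-43ad-b684-b3508a87dd10",
        ["e835c72c-a635-4c4e-baef-8e8d67cd9fec_49b59679-70ca-40e8-b8a9-078b9d51d09b",
         "e835c72c-a635-4c4e-baef-8e8d67cd9fec"])
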
Nov 29 05:41:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 45 KiB/s wr, 4 op/s
Nov 29 05:41:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "format": "json"}]: dispatch
Nov 29 05:41:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0ff274bb-e3ac-4d57-8489-1cecf428692d", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 05:41:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "auth_id": "tempest-cephx-id-2011883581", "format": "json"}]: dispatch
Nov 29 05:41:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "e835c72c-a635-4c4e-baef-8e8d67cd9fec_49b59679-70ca-40e8-b8a9-078b9d51d09b", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "e835c72c-a635-4c4e-baef-8e8d67cd9fec", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:04 compute-0 ceph-mon[75176]: pgmap v1116: 305 pgs: 305 active+clean; 64 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 45 KiB/s wr, 4 op/s
Nov 29 05:41:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 65 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 142 KiB/s wr, 11 op/s
Nov 29 05:41:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:41:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp'
Nov 29 05:41:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp' to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta'
Nov 29 05:41:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "format": "json"}]: dispatch
Nov 29 05:41:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
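
The create/getpath pair above is the usual provisioning handshake: the subvolume is created with an explicit 1 GiB size quota, namespace isolation (its objects land in a private RADOS namespace, visible later in the fsvolumens_* osd cap), and mode 0755; 'getpath' then returns the directory clients should mount. Equivalent CLI calls, as a sketch using the values from the dispatch:

    import subprocess

    def provision(volume, subvolume, size_bytes):
        # Mirrors the 'fs subvolume create' payload above.
        subprocess.run(["ceph", "fs", "subvolume", "create", volume, subvolume,
                        "--size", str(size_bytes),
                        "--namespace-isolated", "--mode", "0755"], check=True)
        # getpath yields the /volumes/_nogroup/<name>/<uuid> mount root.
        out = subprocess.run(["ceph", "fs", "subvolume", "getpath",
                              volume, subvolume],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(provision("cephfs", "9378b5f8-f3c7-4db4-98d1-4cf3955df852", 1 << 30))
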
Nov 29 05:41:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:41:06 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 05:41:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) v1
Nov 29 05:41:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 05:41:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) v1
Nov 29 05:41:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.Joe"}]: dispatch
Nov 29 05:41:07 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5
Nov 29 05:41:07 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22/cc5b8b78-4068-4d1a-9a26-90493fe411f5],prefix=session evict} (starting...)
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 05:41:07 compute-0 ceph-mon[75176]: pgmap v1117: 305 pgs: 305 active+clean; 65 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 142 KiB/s wr, 11 op/s
Nov 29 05:41:07 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:41:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.Joe", "format": "json"}]: dispatch
Nov 29 05:41:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.Joe"}]: dispatch
Nov 29 05:41:07 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "bfb2e2f5-a7fe-4303-8582-2fb7923d4276", "format": "json"}]: dispatch
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 65 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 412 B/s rd, 115 KiB/s wr, 9 op/s
Nov 29 05:41:08 compute-0 sshd-session[270416]: Invalid user user1 from 45.120.216.232 port 60368
Nov 29 05:41:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 29 05:41:08 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:41:08 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "format": "json"}]: dispatch
Nov 29 05:41:08 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 05:41:08 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "auth_id": "Joe", "format": "json"}]: dispatch
Nov 29 05:41:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 29 05:41:08 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 29 05:41:08 compute-0 sshd-session[270416]: Received disconnect from 45.120.216.232 port 60368:11: Bye Bye [preauth]
Nov 29 05:41:08 compute-0 sshd-session[270416]: Disconnected from invalid user user1 45.120.216.232 port 60368 [preauth]
Nov 29 05:41:09 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "bfb2e2f5-a7fe-4303-8582-2fb7923d4276", "format": "json"}]: dispatch
Nov 29 05:41:09 compute-0 ceph-mon[75176]: pgmap v1118: 305 pgs: 305 active+clean; 65 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 412 B/s rd, 115 KiB/s wr, 9 op/s
Nov 29 05:41:09 compute-0 ceph-mon[75176]: osdmap e155: 3 total, 3 up, 3 in
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 757 B/s rd, 132 KiB/s wr, 10 op/s
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "snap_name": "006bb014-977b-4c9c-b290-16a1b0c02828", "format": "json"}]: dispatch
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "admin", "tenant_id": "4e135fffa1e64bf8b2e43bd33b51cf15", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 05:41:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) v1
Nov 29 05:41:10 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin", "format": "json"}]: dispatch
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 05:41:10 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:10.578+0000 7fa4c75e5640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
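
The EPERM (1) above is a guard, not a transient failure: 'fs subvolume authorize' refuses to modify a cephx ID the volumes plugin did not create itself (client.admin predates it), which keeps a tenant-facing API from silently widening the caps of a pre-existing, privileged identity. Callers have to treat this errno as a permanent validation error. A sketch, assuming the CLI exit status carries the errno and flag spellings matching the dispatch payload:

    import subprocess

    def authorize(volume, subvolume, auth_id, tenant_id):
        proc = subprocess.run(
            ["ceph", "fs", "subvolume", "authorize", volume, subvolume,
             auth_id, "--access_level", "rw", "--tenant_id", tenant_id],
            capture_output=True, text=True)
        if proc.returncode == 1:
            # EPERM: the ID exists but is not plugin-managed (or, as later in
            # this log, is bound to a different tenant). Not retryable.
            raise ValueError(f"cephx ID {auth_id!r} rejected: {proc.stderr.strip()}")
        proc.check_returncode()
        return proc.stdout.strip()  # the cephx key for the new grant
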
Nov 29 05:41:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 29 05:41:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 29 05:41:10 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "bfb2e2f5-a7fe-4303-8582-2fb7923d4276_771e373d-f46c-4323-b1c4-c8472b9b21b1", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276_771e373d-f46c-4323-b1c4-c8472b9b21b1, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276_771e373d-f46c-4323-b1c4-c8472b9b21b1, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "bfb2e2f5-a7fe-4303-8582-2fb7923d4276", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bfb2e2f5-a7fe-4303-8582-2fb7923d4276, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:11 compute-0 ceph-mon[75176]: pgmap v1120: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 757 B/s rd, 132 KiB/s wr, 10 op/s
Nov 29 05:41:11 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin", "format": "json"}]: dispatch
Nov 29 05:41:11 compute-0 ceph-mon[75176]: osdmap e156: 3 total, 3 up, 3 in
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f976c040>)]
Nov 29 05:41:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 05:41:12 compute-0 podman[270419]: 2025-11-29 05:41:12.02342074 +0000 UTC m=+0.071406142 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 05:41:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 155 KiB/s wr, 12 op/s
Nov 29 05:41:12 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "snap_name": "006bb014-977b-4c9c-b290-16a1b0c02828", "format": "json"}]: dispatch
Nov 29 05:41:12 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "admin", "tenant_id": "4e135fffa1e64bf8b2e43bd33b51cf15", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:41:12 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "bfb2e2f5-a7fe-4303-8582-2fb7923d4276_771e373d-f46c-4323-b1c4-c8472b9b21b1", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:12 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "bfb2e2f5-a7fe-4303-8582-2fb7923d4276", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 29 05:41:13 compute-0 ceph-mon[75176]: pgmap v1122: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 155 KiB/s wr, 12 op/s
Nov 29 05:41:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 29 05:41:13 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 29 05:41:13 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.csskcz(active, since 32m)
Nov 29 05:41:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:41:13.756 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:41:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:41:13.757 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:41:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:41:13.757 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 77 KiB/s wr, 7 op/s
Nov 29 05:41:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:41:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1950147048' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:41:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:41:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1950147048' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:41:14 compute-0 ceph-mon[75176]: osdmap e157: 3 total, 3 up, 3 in
Nov 29 05:41:14 compute-0 ceph-mon[75176]: mgrmap e16: compute-0.csskcz(active, since 32m)
Nov 29 05:41:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1950147048' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:41:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1950147048' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "david", "tenant_id": "4e135fffa1e64bf8b2e43bd33b51cf15", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
Nov 29 05:41:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 29 05:41:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 05:41:14 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: Creating meta for ID david with tenant 4e135fffa1e64bf8b2e43bd33b51cf15
Nov 29 05:41:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713", "mon", "allow r"], "format": "json"} v 0) v1
Nov 29 05:41:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:41:14 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, tenant_id:4e135fffa1e64bf8b2e43bd33b51cf15, vol_name:cephfs) < ""
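
The successful path for client.david above shows what a subvolume-scoped identity looks like: 'auth get' first checks whether the ID exists, then 'auth get-or-create' mints it with mds caps pinned to the subvolume path, osd caps pinned to the data pool plus the fsvolumens_<subvolume> namespace, and read-only mon caps. The same caps could be minted directly with the CLI; this sketch reuses the exact strings from the dispatch above (normally the plugin does this for you, so this is illustrative rather than a recommended bypass):

    import subprocess

    # Caps copied verbatim from the 'auth get-or-create' dispatch: the path
    # and RADOS namespace confine client.david to a single subvolume.
    subprocess.run([
        "ceph", "auth", "get-or-create", "client.david",
        "mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25",
        "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713",
        "mon", "allow r",
    ], check=True)
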
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2", "format": "json"}]: dispatch
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "snap_name": "006bb014-977b-4c9c-b290-16a1b0c02828_74a96009-fd82-4ae6-b743-29ffffe9710a", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828_74a96009-fd82-4ae6-b743-29ffffe9710a, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp'
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp' to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta'
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828_74a96009-fd82-4ae6-b743-29ffffe9710a, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "snap_name": "006bb014-977b-4c9c-b290-16a1b0c02828", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp'
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta.tmp' to config b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852/.meta'
Nov 29 05:41:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:006bb014-977b-4c9c-b290-16a1b0c02828, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:15 compute-0 ceph-mon[75176]: pgmap v1124: 305 pgs: 305 active+clean; 65 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 77 KiB/s wr, 7 op/s
Nov 29 05:41:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 05:41:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713", "mon", "allow r"], "format": "json"}]: dispatch
Nov 29 05:41:15 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25", "osd", "allow rw pool=cephfs.cephfs.data namespace=fsvolumens_873c8599-1b6c-425f-8c5c-0a211fc50713", "mon", "allow r"], "format": "json"}]': finished
Nov 29 05:41:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:41:15.773 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:41:15 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:41:15.774 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 903 B/s rd, 142 KiB/s wr, 11 op/s
Nov 29 05:41:16 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "david", "tenant_id": "4e135fffa1e64bf8b2e43bd33b51cf15", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:41:16 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2", "format": "json"}]: dispatch
Nov 29 05:41:16 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "snap_name": "006bb014-977b-4c9c-b290-16a1b0c02828_74a96009-fd82-4ae6-b743-29ffffe9710a", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:16 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "snap_name": "006bb014-977b-4c9c-b290-16a1b0c02828", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:16 compute-0 ceph-mon[75176]: pgmap v1125: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 903 B/s rd, 142 KiB/s wr, 11 op/s
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2_40d5debf-b44f-4de6-940d-1c9aafb1724f", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2_40d5debf-b44f-4de6-940d-1c9aafb1724f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2_40d5debf-b44f-4de6-940d-1c9aafb1724f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:41:16 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:16 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:41:16.776 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 05:41:17 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2_40d5debf-b44f-4de6-940d-1c9aafb1724f", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:17 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "5c579ca3-9ef8-4a71-8a77-4ef6bcc0fab2", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:41:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 05:41:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd/.meta.tmp'
Nov 29 05:41:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd/.meta.tmp' to config b'/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd/.meta'
Nov 29 05:41:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 05:41:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "format": "json"}]: dispatch
Nov 29 05:41:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 05:41:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 05:41:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:41:17 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:41:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 83 KiB/s wr, 5 op/s
Nov 29 05:41:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 29 05:41:18 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:41:18 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "format": "json"}]: dispatch
Nov 29 05:41:18 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:41:18 compute-0 ceph-mon[75176]: pgmap v1126: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 83 KiB/s wr, 5 op/s
Nov 29 05:41:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 29 05:41:18 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 29 05:41:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "format": "json"}]: dispatch
Nov 29 05:41:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:18 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:18.299+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9378b5f8-f3c7-4db4-98d1-4cf3955df852' of type subvolume
Nov 29 05:41:18 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9378b5f8-f3c7-4db4-98d1-4cf3955df852' of type subvolume
Nov 29 05:41:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9378b5f8-f3c7-4db4-98d1-4cf3955df852'' moved to trashcan
Nov 29 05:41:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:41:18 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9378b5f8-f3c7-4db4-98d1-4cf3955df852, vol_name:cephfs) < ""
Nov 29 05:41:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 29 05:41:19 compute-0 ceph-mon[75176]: osdmap e158: 3 total, 3 up, 3 in
Nov 29 05:41:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "format": "json"}]: dispatch
Nov 29 05:41:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9378b5f8-f3c7-4db4-98d1-4cf3955df852", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 29 05:41:19 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 29 05:41:19 compute-0 sudo[270441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:19 compute-0 sudo[270441]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:19 compute-0 sudo[270441]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:20 compute-0 sudo[270466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:41:20 compute-0 sudo[270466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:20 compute-0 sudo[270466]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:20 compute-0 sudo[270491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:20 compute-0 sudo[270491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:20 compute-0 sudo[270491]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 890 B/s rd, 170 KiB/s wr, 13 op/s
Nov 29 05:41:20 compute-0 sudo[270516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:41:20 compute-0 sudo[270516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:20 compute-0 ceph-mon[75176]: osdmap e159: 3 total, 3 up, 3 in
Nov 29 05:41:20 compute-0 ceph-mon[75176]: pgmap v1129: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 890 B/s rd, 170 KiB/s wr, 13 op/s
Nov 29 05:41:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 29 05:41:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 29 05:41:20 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 29 05:41:20 compute-0 podman[270613]: 2025-11-29 05:41:20.666521985 +0000 UTC m=+0.079331434 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:41:20 compute-0 podman[270613]: 2025-11-29 05:41:20.796695212 +0000 UTC m=+0.209504611 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "auth_id": "david", "tenant_id": "e97b8963e55a4094b1cb702d19d887ba", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 05:41:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 29 05:41:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, tenant_id:e97b8963e55a4094b1cb702d19d887ba, vol_name:cephfs) < ""
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Nov 29 05:41:21 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:21.360+0000 7fa4c75e5640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
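This EPERM is the volumes module's guard against auth-ID reuse across tenants: "david" was already created under a different tenant_id, so authorizing it on subvolume 28265ef5-ca45-4354-be2b-4e281fa424cd is refused and the mgr replies (1) Operation not permitted. A sketch of how a client can distinguish this from other failures; the flag spellings (--access_level, --tenant_id) are assumptions derived from the mgr parameter names in the audit line, not confirmed by this log:

import subprocess

def authorize(vol: str, sub: str, auth_id: str, tenant: str) -> None:
    # Same request client.openstack dispatched above, replayed via the CLI.
    cmd = ["ceph", "fs", "subvolume", "authorize", vol, sub, auth_id,
           "--access_level", "rw", "--tenant_id", tenant, "--format", "json"]
    res = subprocess.run(cmd, capture_output=True, text=True)
    if res.returncode != 0 and "already in use" in res.stderr:
        # auth ID owned by another tenant: pick a different auth_id instead
        raise PermissionError(f"auth ID {auth_id!r} is already in use")
    res.check_returncode()

authorize("cephfs", "28265ef5-ca45-4354-be2b-4e281fa424cd",
          "david", "e97b8963e55a4094b1cb702d19d887ba")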
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "2df5bdeb-2a6a-41fb-86c0-a340aafa411f", "format": "json"}]: dispatch
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:21 compute-0 ceph-mon[75176]: osdmap e160: 3 total, 3 up, 3 in
Nov 29 05:41:21 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 05:41:21 compute-0 sudo[270516]: pam_unix(sudo:session): session closed for user root
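The session that just closed ran "cephadm ... ls", the inventory call the mgr issues on every refresh; the podman exec/exec_died pair on the mon container above is that command inspecting a running daemon. cephadm ls prints one JSON object per deployed daemon; a sketch of consuming it, where the field names are the usual cephadm output keys and should be treated as assumptions here:

import json, subprocess

raw = subprocess.run(["sudo", "cephadm", "ls"], check=True,
                     capture_output=True, text=True).stdout
for daemon in json.loads(raw):
    # e.g. mon.compute-0 / mgr.compute-0.csskcz / osd.0 ...
    print(daemon.get("name"), daemon.get("state"), daemon.get("systemd_unit"))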
Nov 29 05:41:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:41:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:41:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:41:21 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:41:21 compute-0 sudo[270773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:21 compute-0 sudo[270773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:21 compute-0 sudo[270773]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp'
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp' to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta'
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
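The two metadata_manager lines show how the volumes module persists subvolume config: it writes the full 155 bytes to .meta.tmp and then renames it over .meta, so a reader can never observe a partially written file. The same write-then-rename pattern in miniature (local paths for illustration; the real module performs this inside CephFS via libcephfs):

import os

def atomic_write(path: str, data: bytes) -> None:
    """Publish a config file atomically: readers see old or new contents, never a mix."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())     # make the bytes durable before publishing
    os.rename(tmp, path)         # atomic within one POSIX filesystem

atomic_write("/tmp/.meta", b"[GLOBAL]\nversion = 2\n")   # illustrative payload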
Nov 29 05:41:21 compute-0 sudo[270798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:41:21 compute-0 sudo[270798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "format": "json"}]: dispatch
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 05:41:21 compute-0 sudo[270798]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:21 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
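getpath is how the client learns where the new subvolume landed; the result has the shape /volumes/<group>/<sub_name>/<uuid> (the _nogroup default and the trailing uuid are both visible in the evict filter later in this section). Replaying the dispatched command by hand:

import subprocess

path = subprocess.run(
    ["ceph", "fs", "subvolume", "getpath", "cephfs",
     "dca14011-a433-40d4-8754-3eaafbae5faa"],
    check=True, capture_output=True, text=True).stdout.strip()
print(path)   # expected shape: /volumes/_nogroup/dca14011-.../<uuid>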
Nov 29 05:41:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:41:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:41:21 compute-0 sudo[270823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:21 compute-0 sudo[270823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:21 compute-0 sudo[270823]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:22 compute-0 sudo[270848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:41:22 compute-0 sudo[270848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 7 op/s
Nov 29 05:41:22 compute-0 sudo[270848]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:22 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "auth_id": "david", "tenant_id": "e97b8963e55a4094b1cb702d19d887ba", "access_level": "rw", "format": "json"}]: dispatch
Nov 29 05:41:22 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "2df5bdeb-2a6a-41fb-86c0-a340aafa411f", "format": "json"}]: dispatch
Nov 29 05:41:22 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:41:22 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:41:22 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:41:22 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "format": "json"}]: dispatch
Nov 29 05:41:22 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:41:22 compute-0 ceph-mon[75176]: pgmap v1131: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 7 op/s
Nov 29 05:41:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 05:41:22 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:41:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:41:22 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:41:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:41:22 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:41:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:41:22 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:41:22 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 0cae118e-67b1-4a60-b785-6679b650e472 does not exist
Nov 29 05:41:22 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 9eb7f337-a29f-439e-8c01-ef31460ed309 does not exist
Nov 29 05:41:22 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 26718b66-413d-45b8-852b-558290df7b71 does not exist
Nov 29 05:41:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:41:22 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:41:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:41:22 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:41:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:41:22 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
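This burst of mon_commands is the cephadm mgr refreshing compute-0 ahead of OSD deployment: drop the host-scoped osd_memory_target override, regenerate the minimal ceph.conf it ships to hosts, fetch the client.admin and client.bootstrap-osd keyrings, and check "osd tree" for destroyed OSD ids it could recycle. The minimal conf is reproducible by hand and contains only the fsid and mon address list:

import subprocess

# Same command the mgr dispatched twice above; output is a tiny ceph.conf
# with just [global] fsid and mon_host, suitable for copying to a new host.
conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                      check=True, capture_output=True, text=True).stdout
print(conf)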
Nov 29 05:41:22 compute-0 sudo[270904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:22 compute-0 sudo[270904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:22 compute-0 sudo[270904]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:22 compute-0 sudo[270929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:41:22 compute-0 sudo[270929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:22 compute-0 sudo[270929]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:22 compute-0 sudo[270954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:22 compute-0 sudo[270954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:22 compute-0 sudo[270954]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:23 compute-0 sudo[270979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:41:23 compute-0 sudo[270979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:23 compute-0 podman[271045]: 2025-11-29 05:41:23.383760085 +0000 UTC m=+0.060878409 container create 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:41:23 compute-0 systemd[1]: Started libpod-conmon-69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1.scope.
Nov 29 05:41:23 compute-0 podman[271045]: 2025-11-29 05:41:23.360025883 +0000 UTC m=+0.037144287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:41:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:41:23 compute-0 podman[271045]: 2025-11-29 05:41:23.480490546 +0000 UTC m=+0.157608890 container init 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 05:41:23 compute-0 podman[271045]: 2025-11-29 05:41:23.490308783 +0000 UTC m=+0.167427107 container start 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:41:23 compute-0 podman[271045]: 2025-11-29 05:41:23.493629492 +0000 UTC m=+0.170747936 container attach 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 05:41:23 compute-0 goofy_lederberg[271061]: 167 167
Nov 29 05:41:23 compute-0 systemd[1]: libpod-69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1.scope: Deactivated successfully.
Nov 29 05:41:23 compute-0 podman[271045]: 2025-11-29 05:41:23.498385727 +0000 UTC m=+0.175504061 container died 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c71b7b8d8ea0fb9986090fb1259b219330c8053d144f160706f3d9395b79815-merged.mount: Deactivated successfully.
Nov 29 05:41:23 compute-0 podman[271045]: 2025-11-29 05:41:23.535977673 +0000 UTC m=+0.213095997 container remove 69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:41:23 compute-0 systemd[1]: libpod-conmon-69d719aeff35dec8c271434e5249b257956e793cbfdff76a648fb7aa938050b1.scope: Deactivated successfully.
Nov 29 05:41:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:41:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:41:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:41:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:41:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:41:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:41:23 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:41:23 compute-0 podman[271085]: 2025-11-29 05:41:23.713330738 +0000 UTC m=+0.046786779 container create 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:41:23 compute-0 systemd[1]: Started libpod-conmon-5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75.scope.
Nov 29 05:41:23 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:23 compute-0 podman[271085]: 2025-11-29 05:41:23.692213729 +0000 UTC m=+0.025669850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:41:23 compute-0 podman[271085]: 2025-11-29 05:41:23.794718889 +0000 UTC m=+0.128175040 container init 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:41:23 compute-0 podman[271085]: 2025-11-29 05:41:23.80551413 +0000 UTC m=+0.138970211 container start 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:41:23 compute-0 podman[271085]: 2025-11-29 05:41:23.809244489 +0000 UTC m=+0.142700550 container attach 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:41:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 7 op/s
Nov 29 05:41:24 compute-0 ceph-mon[75176]: pgmap v1132: 305 pgs: 305 active+clean; 66 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 84 KiB/s wr, 7 op/s
Nov 29 05:41:24 compute-0 quirky_brattain[271102]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:41:24 compute-0 quirky_brattain[271102]: --> relative data size: 1.0
Nov 29 05:41:24 compute-0 quirky_brattain[271102]: --> All data devices are unavailable
Nov 29 05:41:24 compute-0 systemd[1]: libpod-5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75.scope: Deactivated successfully.
Nov 29 05:41:24 compute-0 systemd[1]: libpod-5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75.scope: Consumed 1.096s CPU time.
Nov 29 05:41:24 compute-0 podman[271085]: 2025-11-29 05:41:24.961935171 +0000 UTC m=+1.295391222 container died 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 05:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b43ee539b8b3f2c7177160a39a20975e6807571ff2ee559d9917cb8edaa6050b-merged.mount: Deactivated successfully.
Nov 29 05:41:25 compute-0 podman[271085]: 2025-11-29 05:41:25.00501881 +0000 UTC m=+1.338474841 container remove 5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brattain, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 05:41:25 compute-0 systemd[1]: libpod-conmon-5cc088afcc949056b324a2e2e1dbd449a875ce66135a74b82ab36e8415148d75.scope: Deactivated successfully.
Nov 29 05:41:25 compute-0 sudo[270979]: pam_unix(sudo:session): session closed for user root
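The lvm batch run was a no-op by design: cephadm re-applies the default_drive_group spec on every pass, and ceph-volume reports "All data devices are unavailable" because each of the three LVs already carries a BlueStore OSD (the lvm list output below confirms this via their ceph.osd_id tags). A pre-flight sketch using batch's report mode, under the assumption that --report/--format json behave as in stock ceph-volume:

import subprocess

DEVS = ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]

# Ask ceph-volume what batch WOULD do, without creating anything.
# On this host the plan comes back empty: every LV is already an OSD data device.
res = subprocess.run(
    ["ceph-volume", "lvm", "batch", "--report", "--format", "json", *DEVS],
    capture_output=True, text=True)
print(res.stdout or res.stderr)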
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '28265ef5-ca45-4354-be2b-4e281fa424cd'
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd/b0384108-1904-48a8-a8b3-3bb88d8155ec
Nov 29 05:41:25 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd/b0384108-1904-48a8-a8b3-3bb88d8155ec],prefix=session evict} (starting...)
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
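Deauthorizing the already-removed auth ID only logs a warning, but the evict still runs: the mgr asks the MDS, via an admin-socket command, to drop any session whose auth_name is david and whose mount root sits under the subvolume path. A sketch that lists the sessions such a filter would match, using the mds name from this log and the standard "session ls" tell command:

import json, subprocess

raw = subprocess.run(
    ["ceph", "tell", "mds.cephfs.compute-0.mjtuko", "session", "ls"],
    check=True, capture_output=True, text=True).stdout

root = "/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd"
for sess in json.loads(raw):
    meta = sess.get("client_metadata", {})
    if meta.get("root", "").startswith(root):
        print(sess["id"], meta.get("root"))   # candidates for session evict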
Nov 29 05:41:25 compute-0 sudo[271145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:25 compute-0 sudo[271145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:25 compute-0 sudo[271145]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:25 compute-0 sudo[271171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:41:25 compute-0 sudo[271171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:25 compute-0 sudo[271171]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:25 compute-0 sudo[271196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:25 compute-0 sudo[271196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:25 compute-0 sudo[271196]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:25 compute-0 sudo[271221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:41:25 compute-0 sudo[271221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2", "format": "json"}]: dispatch
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 05:41:25 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
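Snapshot creation is synchronous in the volumes module (compare the clone removal at the top of this section, which went through the async_job queue). The dispatched command, replayed by hand with the names from the audit line:

import subprocess

subprocess.run(
    ["ceph", "fs", "subvolume", "snapshot", "create", "cephfs",
     "dca14011-a433-40d4-8754-3eaafbae5faa",
     "f919bca8-f41c-47b0-8fca-f8f7988969c2"],
    check=True)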
Nov 29 05:41:25 compute-0 podman[271287]: 2025-11-29 05:41:25.600791968 +0000 UTC m=+0.034564664 container create 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:41:25 compute-0 systemd[1]: Started libpod-conmon-38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b.scope.
Nov 29 05:41:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:41:25 compute-0 podman[271287]: 2025-11-29 05:41:25.670290964 +0000 UTC m=+0.104063680 container init 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 05:41:25 compute-0 podman[271287]: 2025-11-29 05:41:25.67680762 +0000 UTC m=+0.110580316 container start 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:41:25 compute-0 podman[271287]: 2025-11-29 05:41:25.680187662 +0000 UTC m=+0.113960378 container attach 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:41:25 compute-0 ecstatic_brown[271304]: 167 167
Nov 29 05:41:25 compute-0 podman[271287]: 2025-11-29 05:41:25.585436308 +0000 UTC m=+0.019209024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:41:25 compute-0 systemd[1]: libpod-38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b.scope: Deactivated successfully.
Nov 29 05:41:25 compute-0 podman[271287]: 2025-11-29 05:41:25.681705019 +0000 UTC m=+0.115477725 container died 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:41:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-03dea9f972729653c9ee72230a6bbd0297db961f3982f034360a12ef0623d0d1-merged.mount: Deactivated successfully.
Nov 29 05:41:25 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 05:41:25 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 05:41:25 compute-0 podman[271287]: 2025-11-29 05:41:25.720281088 +0000 UTC m=+0.154053784 container remove 38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_brown, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:41:25 compute-0 systemd[1]: libpod-conmon-38335fe38d9fd4eaa9b37c13059d502fcd670843a9829350a690961ceeddd18b.scope: Deactivated successfully.
Nov 29 05:41:25 compute-0 podman[271327]: 2025-11-29 05:41:25.92946349 +0000 UTC m=+0.049426273 container create b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:41:25 compute-0 systemd[1]: Started libpod-conmon-b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282.scope.
Nov 29 05:41:25 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adeadb41e8ff66abd5eb469e8ecd2f0be4f369834e269ac101b6fae4a0bc505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adeadb41e8ff66abd5eb469e8ecd2f0be4f369834e269ac101b6fae4a0bc505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adeadb41e8ff66abd5eb469e8ecd2f0be4f369834e269ac101b6fae4a0bc505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1adeadb41e8ff66abd5eb469e8ecd2f0be4f369834e269ac101b6fae4a0bc505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:26 compute-0 podman[271327]: 2025-11-29 05:41:26.00620965 +0000 UTC m=+0.126172473 container init b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:41:26 compute-0 podman[271327]: 2025-11-29 05:41:25.91497695 +0000 UTC m=+0.034939753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:41:26 compute-0 podman[271327]: 2025-11-29 05:41:26.013754572 +0000 UTC m=+0.133717365 container start b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:41:26 compute-0 podman[271327]: 2025-11-29 05:41:26.017316458 +0000 UTC m=+0.137279261 container attach b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:41:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 115 KiB/s wr, 9 op/s
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]: {
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:     "0": [
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:         {
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "devices": [
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "/dev/loop3"
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             ],
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_name": "ceph_lv0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_size": "21470642176",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "name": "ceph_lv0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "tags": {
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.cluster_name": "ceph",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.crush_device_class": "",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.encrypted": "0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.osd_id": "0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.type": "block",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.vdo": "0"
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             },
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "type": "block",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "vg_name": "ceph_vg0"
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:         }
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:     ],
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:     "1": [
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:         {
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "devices": [
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "/dev/loop4"
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             ],
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_name": "ceph_lv1",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_size": "21470642176",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "name": "ceph_lv1",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "tags": {
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.cluster_name": "ceph",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.crush_device_class": "",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.encrypted": "0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.osd_id": "1",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.type": "block",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.vdo": "0"
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             },
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "type": "block",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "vg_name": "ceph_vg1"
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:         }
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:     ],
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:     "2": [
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:         {
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "devices": [
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "/dev/loop5"
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             ],
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_name": "ceph_lv2",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_size": "21470642176",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "name": "ceph_lv2",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "tags": {
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.cluster_name": "ceph",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.crush_device_class": "",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.encrypted": "0",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.osd_id": "2",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.type": "block",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:                 "ceph.vdo": "0"
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             },
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "type": "block",
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:             "vg_name": "ceph_vg2"
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:         }
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]:     ]
Nov 29 05:41:26 compute-0 romantic_varahamihira[271343]: }
Nov 29 05:41:26 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2", "format": "json"}]: dispatch
Nov 29 05:41:26 compute-0 ceph-mon[75176]: pgmap v1133: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 782 B/s rd, 115 KiB/s wr, 9 op/s
Nov 29 05:41:26 compute-0 systemd[1]: libpod-b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282.scope: Deactivated successfully.
Nov 29 05:41:26 compute-0 podman[271327]: 2025-11-29 05:41:26.818057656 +0000 UTC m=+0.938020499 container died b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1adeadb41e8ff66abd5eb469e8ecd2f0be4f369834e269ac101b6fae4a0bc505-merged.mount: Deactivated successfully.
Nov 29 05:41:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "2df5bdeb-2a6a-41fb-86c0-a340aafa411f_dea690a8-7401-442a-8d5e-63a333d20ef8", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f_dea690a8-7401-442a-8d5e-63a333d20ef8, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:41:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:41:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f_dea690a8-7401-442a-8d5e-63a333d20ef8, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "2df5bdeb-2a6a-41fb-86c0-a340aafa411f", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:27 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:27 compute-0 podman[271327]: 2025-11-29 05:41:27.423026177 +0000 UTC m=+1.542988960 container remove b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_varahamihira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 29 05:41:27 compute-0 sudo[271221]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:27 compute-0 systemd[1]: libpod-conmon-b52d48713482279900e902cab5a95d91ed483725fa8308ae9accbdbb91b59282.scope: Deactivated successfully.
Nov 29 05:41:27 compute-0 sudo[271364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:27 compute-0 sudo[271364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:27 compute-0 sudo[271364]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:27 compute-0 sudo[271396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:41:27 compute-0 sudo[271396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:27 compute-0 podman[271382]: 2025-11-29 05:41:27.630049336 +0000 UTC m=+0.085078741 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 05:41:27 compute-0 sudo[271396]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:27 compute-0 sudo[271433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:27 compute-0 sudo[271433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:27 compute-0 sudo[271433]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:27 compute-0 sudo[271458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:41:27 compute-0 sudo[271458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "2df5bdeb-2a6a-41fb-86c0-a340aafa411f_dea690a8-7401-442a-8d5e-63a333d20ef8", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:28 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "2df5bdeb-2a6a-41fb-86c0-a340aafa411f", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 695 B/s rd, 102 KiB/s wr, 8 op/s
Nov 29 05:41:28 compute-0 podman[271524]: 2025-11-29 05:41:28.13929326 +0000 UTC m=+0.047328692 container create 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2df5bdeb-2a6a-41fb-86c0-a340aafa411f, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:28 compute-0 systemd[1]: Started libpod-conmon-93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b.scope.
Nov 29 05:41:28 compute-0 podman[271524]: 2025-11-29 05:41:28.113906788 +0000 UTC m=+0.021942240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:41:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:41:28 compute-0 podman[271524]: 2025-11-29 05:41:28.254102947 +0000 UTC m=+0.162138409 container init 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:41:28 compute-0 podman[271524]: 2025-11-29 05:41:28.260544383 +0000 UTC m=+0.168579815 container start 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:41:28 compute-0 loving_goodall[271541]: 167 167
Nov 29 05:41:28 compute-0 systemd[1]: libpod-93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b.scope: Deactivated successfully.
Nov 29 05:41:28 compute-0 podman[271524]: 2025-11-29 05:41:28.291168371 +0000 UTC m=+0.199203803 container attach 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:41:28 compute-0 podman[271524]: 2025-11-29 05:41:28.291626252 +0000 UTC m=+0.199661674 container died 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:41:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-44255bd98b4c9d1565568c96d9ff4e8f7d2612a70bbbbc1dfbafbc203a31f816-merged.mount: Deactivated successfully.
Nov 29 05:41:28 compute-0 podman[271524]: 2025-11-29 05:41:28.344979768 +0000 UTC m=+0.253015200 container remove 93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:41:28 compute-0 systemd[1]: libpod-conmon-93a6c9b83b1f2024ade9f83847552f65d17b8d3e3c92928f205fcd79fa22a63b.scope: Deactivated successfully.
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:41:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) v1
Nov 29 05:41:28 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 05:41:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) v1
Nov 29 05:41:28 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.david"}]: dispatch
Nov 29 05:41:28 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25
Nov 29 05:41:28 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713/e4553e4d-304b-4c6d-85d9-c62092dcad25],prefix=session evict} (starting...)
Nov 29 05:41:28 compute-0 podman[271566]: 2025-11-29 05:41:28.493765663 +0000 UTC m=+0.039744718 container create 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:41:28 compute-0 systemd[1]: Started libpod-conmon-4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc.scope.
Nov 29 05:41:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718c46e211b062c1f921d89bec18d865aff7130d5d6825b142a608001f00f4f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718c46e211b062c1f921d89bec18d865aff7130d5d6825b142a608001f00f4f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718c46e211b062c1f921d89bec18d865aff7130d5d6825b142a608001f00f4f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718c46e211b062c1f921d89bec18d865aff7130d5d6825b142a608001f00f4f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:41:28 compute-0 podman[271566]: 2025-11-29 05:41:28.569384346 +0000 UTC m=+0.115363421 container init 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:41:28 compute-0 podman[271566]: 2025-11-29 05:41:28.477113412 +0000 UTC m=+0.023092487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:41:28 compute-0 podman[271566]: 2025-11-29 05:41:28.579655014 +0000 UTC m=+0.125634059 container start 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:41:28 compute-0 podman[271566]: 2025-11-29 05:41:28.582859601 +0000 UTC m=+0.128838666 container attach 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2", "target_sub_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, target_sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp'
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp' to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta'
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.clone_index] tracking-id ec1326b5-a5e4-4d5f-8f2c-27b9bccec565 for path b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab'
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp'
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp' to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta'
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, target_sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:28 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:28.980+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:28 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:28.980+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:28 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:28.980+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:28 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:28.980+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:28 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:28.980+0000 7fa4cc5ef640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab
Nov 29 05:41:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, a4fbeb19-4b4a-408e-8a0f-278794e0aaab)
Nov 29 05:41:29 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:29.000+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:29 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:29 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:29.000+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:29 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:29 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:29.000+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:29 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:29 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:29.000+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:29 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:29 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:29.000+0000 7fa4ccdf0640 -1 client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:29 compute-0 ceph-mgr[75473]: client.0 error registering admin socket command: (17) File exists
Nov 29 05:41:29 compute-0 ceph-mon[75176]: pgmap v1134: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 695 B/s rd, 102 KiB/s wr, 8 op/s
Nov 29 05:41:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.david", "format": "json"}]: dispatch
Nov 29 05:41:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth rm", "entity": "client.david"}]: dispatch
Nov 29 05:41:29 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Nov 29 05:41:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, a4fbeb19-4b4a-408e-8a0f-278794e0aaab) -- by 0 seconds
Nov 29 05:41:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp'
Nov 29 05:41:29 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp' to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta'
Nov 29 05:41:29 compute-0 happy_noether[271584]: {
Nov 29 05:41:29 compute-0 happy_noether[271584]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "osd_id": 0,
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "type": "bluestore"
Nov 29 05:41:29 compute-0 happy_noether[271584]:     },
Nov 29 05:41:29 compute-0 happy_noether[271584]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "osd_id": 1,
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "type": "bluestore"
Nov 29 05:41:29 compute-0 happy_noether[271584]:     },
Nov 29 05:41:29 compute-0 happy_noether[271584]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "osd_id": 2,
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:41:29 compute-0 happy_noether[271584]:         "type": "bluestore"
Nov 29 05:41:29 compute-0 happy_noether[271584]:     }
Nov 29 05:41:29 compute-0 happy_noether[271584]: }
Nov 29 05:41:29 compute-0 systemd[1]: libpod-4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc.scope: Deactivated successfully.
Nov 29 05:41:29 compute-0 podman[271566]: 2025-11-29 05:41:29.555450092 +0000 UTC m=+1.101429187 container died 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:41:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-718c46e211b062c1f921d89bec18d865aff7130d5d6825b142a608001f00f4f4-merged.mount: Deactivated successfully.
Nov 29 05:41:29 compute-0 podman[271566]: 2025-11-29 05:41:29.603247854 +0000 UTC m=+1.149226909 container remove 4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_noether, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:41:29 compute-0 systemd[1]: libpod-conmon-4e4ad4c3ad87d224b5f12349ce8d1acd710e9145d75a809fc539eccf50c316fc.scope: Deactivated successfully.
Nov 29 05:41:29 compute-0 sudo[271458]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:41:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:41:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:41:29 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:41:29 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev f8658792-6b02-4ab8-95a2-65990a058386 does not exist
Nov 29 05:41:29 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 8703521b-8fdd-4efe-9e31-506a2e07e73b does not exist
Nov 29 05:41:29 compute-0 sudo[271655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:41:29 compute-0 sudo[271655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:29 compute-0 sudo[271655]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:29 compute-0 sudo[271680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:41:29 compute-0 sudo[271680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:41:29 compute-0 sudo[271680]: pam_unix(sudo:session): session closed for user root
Nov 29 05:41:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 05:41:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "david", "format": "json"}]: dispatch
Nov 29 05:41:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2", "target_sub_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 05:41:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 05:41:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:41:30 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:41:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 111 KiB/s wr, 7 op/s
Nov 29 05:41:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:31 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.csskcz(active, since 32m)
Nov 29 05:41:31 compute-0 ceph-mon[75176]: pgmap v1135: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 111 KiB/s wr, 7 op/s
Nov 29 05:41:32 compute-0 ceph-mon[75176]: mgrmap e17: compute-0.csskcz(active, since 32m)
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 96 KiB/s wr, 6 op/s
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "22583c21-c0dc-4991-a17b-a735e6d7c9f4_70eab419-f284-47c7-b0cd-6e257fe57f1d", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4_70eab419-f284-47c7-b0cd-6e257fe57f1d, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.snap/f919bca8-f41c-47b0-8fca-f8f7988969c2/0b15d7c5-c29f-491e-8e79-ff980dbb8d2d' to b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/a159fbfa-75c1-4d65-9295-73f51ae6b10d'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4_70eab419-f284-47c7-b0cd-6e257fe57f1d, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "22583c21-c0dc-4991-a17b-a735e6d7c9f4", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta.tmp' to config b'/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10/.meta'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:22583c21-c0dc-4991-a17b-a735e6d7c9f4, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp' to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.clone_index] untracking ec1326b5-a5e4-4d5f-8f2c-27b9bccec565
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp' to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta.tmp' to config b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab/.meta'
Nov 29 05:41:32 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, a4fbeb19-4b4a-408e-8a0f-278794e0aaab)
Nov 29 05:41:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "format": "json"}]: dispatch
Nov 29 05:41:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:28265ef5-ca45-4354-be2b-4e281fa424cd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:28265ef5-ca45-4354-be2b-4e281fa424cd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:33 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '28265ef5-ca45-4354-be2b-4e281fa424cd' of type subvolume
Nov 29 05:41:33 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:33.033+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '28265ef5-ca45-4354-be2b-4e281fa424cd' of type subvolume
Nov 29 05:41:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 05:41:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 29 05:41:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 29 05:41:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/28265ef5-ca45-4354-be2b-4e281fa424cd'' moved to trashcan
Nov 29 05:41:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:41:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:28265ef5-ca45-4354-be2b-4e281fa424cd, vol_name:cephfs) < ""
Nov 29 05:41:33 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 29 05:41:33 compute-0 ceph-mon[75176]: pgmap v1136: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 96 KiB/s wr, 6 op/s
Nov 29 05:41:33 compute-0 podman[271705]: 2025-11-29 05:41:33.117065673 +0000 UTC m=+0.160934100 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 05:41:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 29 05:41:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 29 05:41:34 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 29 05:41:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "22583c21-c0dc-4991-a17b-a735e6d7c9f4_70eab419-f284-47c7-b0cd-6e257fe57f1d", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "snap_name": "22583c21-c0dc-4991-a17b-a735e6d7c9f4", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "format": "json"}]: dispatch
Nov 29 05:41:34 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "28265ef5-ca45-4354-be2b-4e281fa424cd", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:34 compute-0 ceph-mon[75176]: osdmap e161: 3 total, 3 up, 3 in
Nov 29 05:41:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 6 op/s
Nov 29 05:41:35 compute-0 ceph-mon[75176]: osdmap e162: 3 total, 3 up, 3 in
Nov 29 05:41:35 compute-0 ceph-mon[75176]: pgmap v1139: 305 pgs: 305 active+clean; 67 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 88 KiB/s wr, 6 op/s
Nov 29 05:41:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "format": "json"}]: dispatch
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:07a65cd4-2777-43ad-b684-b3508a87dd10, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:07a65cd4-2777-43ad-b684-b3508a87dd10, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '07a65cd4-2777-43ad-b684-b3508a87dd10' of type subvolume
Nov 29 05:41:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:35.737+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '07a65cd4-2777-43ad-b684-b3508a87dd10' of type subvolume
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path '/volumes/_nogroup/07a65cd4-2777-43ad-b684-b3508a87dd10' moved to trashcan
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:07a65cd4-2777-43ad-b684-b3508a87dd10, vol_name:cephfs) < ""
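The errno 95 replies above are expected rather than failures: before each removal the client probes "fs clone status", and "not allowed on subvolume ... of type subvolume" is taken as proof that the object is a plain subvolume, not a pending clone, so the "fs subvolume rm" that follows is safe. A sketch of that probe via the ceph CLI (the helper name and the fallthrough logic are illustrative, not from the log):

    import json
    import subprocess

    def clone_state(vol: str, name: str):
        p = subprocess.run(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             "fs", "clone", "status", vol, name, "--format", "json"],
            capture_output=True, text=True,
        )
        if p.returncode != 0:
            if "not allowed on subvolume" in p.stderr:
                return None  # plain subvolume: go straight to 'fs subvolume rm'
            raise RuntimeError(p.stderr.strip())
        return json.loads(p.stdout)["status"]["state"]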
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "format": "json"}]: dispatch
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fb7c7b44-2af1-44fc-8694-006120ff8320, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fb7c7b44-2af1-44fc-8694-006120ff8320, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb7c7b44-2af1-44fc-8694-006120ff8320' of type subvolume
Nov 29 05:41:35 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:35.898+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb7c7b44-2af1-44fc-8694-006120ff8320' of type subvolume
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path '/volumes/_nogroup/fb7c7b44-2af1-44fc-8694-006120ff8320' moved to trashcan
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:41:35 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb7c7b44-2af1-44fc-8694-006120ff8320, vol_name:cephfs) < ""
Nov 29 05:41:35 compute-0 nova_compute[254898]: 2025-11-29 05:41:35.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:41:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 157 KiB/s wr, 12 op/s
Nov 29 05:41:36 compute-0 nova_compute[254898]: 2025-11-29 05:41:36.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:41:36 compute-0 nova_compute[254898]: 2025-11-29 05:41:36.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:41:36 compute-0 nova_compute[254898]: 2025-11-29 05:41:36.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:41:36 compute-0 nova_compute[254898]: 2025-11-29 05:41:36.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:41:36 compute-0 nova_compute[254898]: 2025-11-29 05:41:36.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
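The skip above is controlled by a single option: ComputeManager._reclaim_queued_deletes only reclaims soft-deleted instances when reclaim_instance_interval is positive. A nova.conf excerpt that would enable it (the interval value here is illustrative):

    [DEFAULT]
    # <= 0 (the default) disables soft delete entirely; the periodic task then
    # logs "CONF.reclaim_instance_interval <= 0, skipping..." as seen above.
    reclaim_instance_interval = 604800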
Nov 29 05:41:37 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "format": "json"}]: dispatch
Nov 29 05:41:37 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "07a65cd4-2777-43ad-b684-b3508a87dd10", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:37 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "format": "json"}]: dispatch
Nov 29 05:41:37 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb7c7b44-2af1-44fc-8694-006120ff8320", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:37 compute-0 ceph-mon[75176]: pgmap v1140: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 157 KiB/s wr, 12 op/s
Nov 29 05:41:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 69 KiB/s wr, 5 op/s
Nov 29 05:41:38 compute-0 nova_compute[254898]: 2025-11-29 05:41:38.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:41:38 compute-0 nova_compute[254898]: 2025-11-29 05:41:38.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:41:38 compute-0 nova_compute[254898]: 2025-11-29 05:41:38.996 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:41:38 compute-0 nova_compute[254898]: 2025-11-29 05:41:38.997 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:41:38 compute-0 nova_compute[254898]: 2025-11-29 05:41:38.997 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:41:38 compute-0 nova_compute[254898]: 2025-11-29 05:41:38.997 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:41:38 compute-0 nova_compute[254898]: 2025-11-29 05:41:38.997 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:41:39 compute-0 ceph-mon[75176]: pgmap v1141: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 639 B/s rd, 69 KiB/s wr, 5 op/s
Nov 29 05:41:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "format": "json"}]: dispatch
Nov 29 05:41:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a0e01f60-977a-4212-be2c-851b3318eb22, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a0e01f60-977a-4212-be2c-851b3318eb22, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:39 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0e01f60-977a-4212-be2c-851b3318eb22' of type subvolume
Nov 29 05:41:39 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:39.339+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a0e01f60-977a-4212-be2c-851b3318eb22' of type subvolume
Nov 29 05:41:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 05:41:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path '/volumes/_nogroup/a0e01f60-977a-4212-be2c-851b3318eb22' moved to trashcan
Nov 29 05:41:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:41:39 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a0e01f60-977a-4212-be2c-851b3318eb22, vol_name:cephfs) < ""
Nov 29 05:41:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:41:39 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/839332280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:41:39 compute-0 nova_compute[254898]: 2025-11-29 05:41:39.473 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
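update_available_resource shells out to ceph df because the instance disks live on RBD, so free disk space comes from the cluster rather than a local filesystem. A sketch of the same call and the fields it reads, assuming the JSON layout of current Ceph releases (stats.total_bytes, stats.total_avail_bytes):

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024 ** 3)
    print("total GiB:", stats["total_bytes"] / 1024 ** 3)

The result feeds the free_disk figure in the hypervisor resource view that follows.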
Nov 29 05:41:39 compute-0 nova_compute[254898]: 2025-11-29 05:41:39.648 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:41:39 compute-0 nova_compute[254898]: 2025-11-29 05:41:39.649 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5033MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:41:39 compute-0 nova_compute[254898]: 2025-11-29 05:41:39.649 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:41:39 compute-0 nova_compute[254898]: 2025-11-29 05:41:39.649 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:41:39 compute-0 nova_compute[254898]: 2025-11-29 05:41:39.721 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:41:39 compute-0 nova_compute[254898]: 2025-11-29 05:41:39.721 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:41:39 compute-0 nova_compute[254898]: 2025-11-29 05:41:39.738 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:41:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 129 KiB/s wr, 10 op/s
Nov 29 05:41:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:41:40 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182515748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:41:40 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/839332280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:41:40 compute-0 nova_compute[254898]: 2025-11-29 05:41:40.209 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:41:40 compute-0 nova_compute[254898]: 2025-11-29 05:41:40.215 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:41:40 compute-0 nova_compute[254898]: 2025-11-29 05:41:40.234 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:41:40 compute-0 nova_compute[254898]: 2025-11-29 05:41:40.235 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:41:40 compute-0 nova_compute[254898]: 2025-11-29 05:41:40.236 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
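The inventory record above is enough to reproduce placement's capacity arithmetic: for each resource class the schedulable amount is (total - reserved) * allocation_ratio. Worked through with the logged numbers:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, f in inventory.items():
        capacity = (f["total"] - f["reserved"]) * f["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 53.1

So this otherwise idle host can overcommit to 32 vCPUs, while memory is not overcommitted at all and disk is deliberately undercommitted (ratio 0.9).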
Nov 29 05:41:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 29 05:41:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 29 05:41:40 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 29 05:41:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "format": "json"}]: dispatch
Nov 29 05:41:41 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a0e01f60-977a-4212-be2c-851b3318eb22", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:41 compute-0 ceph-mon[75176]: pgmap v1142: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 129 KiB/s wr, 10 op/s
Nov 29 05:41:41 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/182515748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:41:41 compute-0 ceph-mon[75176]: osdmap e163: 3 total, 3 up, 3 in
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:41:41
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.mgr', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:41:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:41:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 128 KiB/s wr, 10 op/s
Nov 29 05:41:42 compute-0 nova_compute[254898]: 2025-11-29 05:41:42.232 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:41:42 compute-0 nova_compute[254898]: 2025-11-29 05:41:42.232 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:41:42 compute-0 nova_compute[254898]: 2025-11-29 05:41:42.232 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:41:42 compute-0 nova_compute[254898]: 2025-11-29 05:41:42.232 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:41:42 compute-0 nova_compute[254898]: 2025-11-29 05:41:42.244 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:41:43 compute-0 podman[271776]: 2025-11-29 05:41:43.045238475 +0000 UTC m=+0.089517939 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "admin", "format": "json"}]: dispatch
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Nov 29 05:41:43 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:43.115+0000 7fa4c75e5640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Nov 29 05:41:43 compute-0 ceph-mon[75176]: pgmap v1144: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 128 KiB/s wr, 10 op/s
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "format": "json"}]: dispatch
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:873c8599-1b6c-425f-8c5c-0a211fc50713, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:873c8599-1b6c-425f-8c5c-0a211fc50713, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:41:43 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:41:43.223+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '873c8599-1b6c-425f-8c5c-0a211fc50713' of type subvolume
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '873c8599-1b6c-425f-8c5c-0a211fc50713' of type subvolume
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path '/volumes/_nogroup/873c8599-1b6c-425f-8c5c-0a211fc50713' moved to trashcan
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:41:43 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:873c8599-1b6c-425f-8c5c-0a211fc50713, vol_name:cephfs) < ""
Nov 29 05:41:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 103 KiB/s wr, 8 op/s
Nov 29 05:41:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "auth_id": "admin", "format": "json"}]: dispatch
Nov 29 05:41:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "format": "json"}]: dispatch
Nov 29 05:41:44 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "873c8599-1b6c-425f-8c5c-0a211fc50713", "force": true, "format": "json"}]: dispatch
Nov 29 05:41:45 compute-0 ceph-mon[75176]: pgmap v1145: 305 pgs: 305 active+clean; 68 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 103 KiB/s wr, 8 op/s
Nov 29 05:41:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 78 KiB/s wr, 5 op/s
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.237882) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906237959, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1517, "num_deletes": 257, "total_data_size": 2136129, "memory_usage": 2171560, "flush_reason": "Manual Compaction"}
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906256469, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2102066, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24729, "largest_seqno": 26245, "table_properties": {"data_size": 2094699, "index_size": 4184, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17657, "raw_average_key_size": 21, "raw_value_size": 2079122, "raw_average_value_size": 2514, "num_data_blocks": 186, "num_entries": 827, "num_filter_entries": 827, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394820, "oldest_key_time": 1764394820, "file_creation_time": 1764394906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 18635 microseconds, and 10384 cpu microseconds.
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.256529) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2102066 bytes OK
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.256553) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.258429) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.258451) EVENT_LOG_v1 {"time_micros": 1764394906258444, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.258475) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2128889, prev total WAL file size 2128889, number of live WAL files 2.
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.259709) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2052KB)], [56(9707KB)]
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906259763, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12042553, "oldest_snapshot_seqno": -1}
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5622 keys, 10213474 bytes, temperature: kUnknown
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906347476, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 10213474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10172030, "index_size": 26294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 140044, "raw_average_key_size": 24, "raw_value_size": 10067339, "raw_average_value_size": 1790, "num_data_blocks": 1092, "num_entries": 5622, "num_filter_entries": 5622, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764394906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.347805) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 10213474 bytes
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.349495) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.1 rd, 116.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 6154, records dropped: 532 output_compression: NoCompression
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.349518) EVENT_LOG_v1 {"time_micros": 1764394906349506, "job": 30, "event": "compaction_finished", "compaction_time_micros": 87842, "compaction_time_cpu_micros": 45060, "output_level": 6, "num_output_files": 1, "total_output_size": 10213474, "num_input_records": 6154, "num_output_records": 5622, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906350130, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764394906352558, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.259583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.352660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.352667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.352670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.352672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:41:46 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:41:46.352675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
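The compaction summary in JOB 30 reports its own amplification figures, and they reduce to simple ratios over the logged sizes: a 2.0 MB L0 flush plus 9.5 MB of existing L6 data were rewritten as one 9.7 MB L6 table, dropping 532 of 6154 records (applied tombstones) along the way:

    l0_in, l6_in, out = 2.0, 9.5, 9.7           # MB, from "in(2.0, 9.5 ...) out(9.7 ...)"
    write_amplify = out / l0_in                 # 9.7 / 2.0           -> ~4.9
    rw_amplify = (l0_in + l6_in + out) / l0_in  # (2.0+9.5+9.7) / 2.0 -> 10.6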
Nov 29 05:41:47 compute-0 ceph-mon[75176]: pgmap v1146: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 78 KiB/s wr, 5 op/s
Nov 29 05:41:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 78 KiB/s wr, 5 op/s
Nov 29 05:41:48 compute-0 ceph-mon[75176]: pgmap v1147: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 716 B/s rd, 78 KiB/s wr, 5 op/s
Nov 29 05:41:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 3 op/s
Nov 29 05:41:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:51 compute-0 ceph-mon[75176]: pgmap v1148: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 3 op/s
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.000447614992348766 of space, bias 4.0, pg target 0.5371379908185192 quantized to 16 (current 16)
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:41:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
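Each pg_autoscaler pair above follows one formula: pg_target = usage_ratio * bias * PG budget, where the budget is the target PG count per OSD times the number of OSDs, and the result is then quantized to a power of two (left at the pool's current value when the change is too small to act on). Assuming the default mon_target_pg_per_osd of 100 and this cluster's 3 OSDs, the logged targets reproduce exactly:

    budget = 100 * 3  # mon_target_pg_per_osd * OSD count (100 is the assumed default)
    print(7.185749983720779e-06 * 1.0 * budget)  # 0.0021557249951162337 -> '.mgr'
    print(0.000447614992348766 * 4.0 * budget)   # 0.5371379908185192    -> cephfs.cephfs.meta
    print(0.000665858301588852 * 1.0 * budget)   # 0.19975749047665559   -> images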
Nov 29 05:41:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 32 KiB/s wr, 2 op/s
Nov 29 05:41:52 compute-0 ceph-mon[75176]: pgmap v1149: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 357 B/s rd, 32 KiB/s wr, 2 op/s
Nov 29 05:41:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 31 KiB/s wr, 2 op/s
Nov 29 05:41:55 compute-0 ceph-mon[75176]: pgmap v1150: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 31 KiB/s wr, 2 op/s
Nov 29 05:41:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:41:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 35 KiB/s wr, 2 op/s
Nov 29 05:41:56 compute-0 sshd-session[271796]: Received disconnect from 80.94.93.233 port 51788:11:  [preauth]
Nov 29 05:41:56 compute-0 sshd-session[271796]: Disconnected from authenticating user root 80.94.93.233 port 51788 [preauth]
Nov 29 05:41:57 compute-0 ceph-mon[75176]: pgmap v1151: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 35 KiB/s wr, 2 op/s
Nov 29 05:41:57 compute-0 sshd-session[271798]: Invalid user casaos from 152.32.145.111 port 44040
Nov 29 05:41:58 compute-0 podman[271800]: 2025-11-29 05:41:58.012538221 +0000 UTC m=+0.061020522 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 05:41:58 compute-0 sshd-session[271798]: Received disconnect from 152.32.145.111 port 44040:11: Bye Bye [preauth]
Nov 29 05:41:58 compute-0 sshd-session[271798]: Disconnected from invalid user casaos 152.32.145.111 port 44040 [preauth]
Nov 29 05:41:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 10 KiB/s wr, 0 op/s
Nov 29 05:41:59 compute-0 ceph-mon[75176]: pgmap v1152: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 10 KiB/s wr, 0 op/s
Nov 29 05:42:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 10 KiB/s wr, 0 op/s
Nov 29 05:42:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:01 compute-0 ceph-mon[75176]: pgmap v1153: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 10 KiB/s wr, 0 op/s
Nov 29 05:42:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s wr, 0 op/s
Nov 29 05:42:03 compute-0 ceph-mon[75176]: pgmap v1154: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s wr, 0 op/s
Nov 29 05:42:04 compute-0 podman[271821]: 2025-11-29 05:42:04.068187103 +0000 UTC m=+0.106237681 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 05:42:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s wr, 0 op/s
Nov 29 05:42:05 compute-0 ceph-mon[75176]: pgmap v1155: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s wr, 0 op/s
Nov 29 05:42:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s wr, 0 op/s
Nov 29 05:42:07 compute-0 ceph-mon[75176]: pgmap v1156: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s wr, 0 op/s
Nov 29 05:42:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:08 compute-0 ceph-mon[75176]: pgmap v1157: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:11 compute-0 ceph-mon[75176]: pgmap v1158: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:42:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:42:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:42:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:42:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:42:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:42:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:13 compute-0 ceph-mon[75176]: pgmap v1159: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:42:13.758 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:42:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:42:13.758 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:42:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:42:13.759 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
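The acquire/acquired/released triple above is oslo.concurrency's standard lock tracing: ProcessMonitor._check_child_processes runs under a named lock so only one periodic child-process check executes at a time. A sketch of the same pattern, assuming only the oslo.concurrency library (the function body here is hypothetical):

    from oslo_concurrency import lockutils

    # Serializes callers on the named lock; oslo emits the same
    # "Acquiring lock" / "acquired" / "released" DEBUG lines seen above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # hypothetical body: respawn any dead child processes

    check_child_processes()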
Nov 29 05:42:14 compute-0 podman[271847]: 2025-11-29 05:42:14.002402666 +0000 UTC m=+0.052500676 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 05:42:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:42:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120545269' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:42:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:42:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4120545269' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:42:15 compute-0 ceph-mon[75176]: pgmap v1160: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/4120545269' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:42:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/4120545269' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
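The df and osd pool get-quota payloads dispatched above are librados mon_commands issued as client.openstack, most likely a Cinder/Glance client polling pool capacity and quota. A sketch of issuing the same commands with the python-rados binding; the conffile path and the "openstack" rados_id are assumptions taken from the log's entity name:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          rados_id="openstack")  # assumed client name/keyring
    cluster.connect()

    # Same JSON command bodies the monitor logged as dispatched.
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes",
                 "format": "json"}):
        ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret)
    cluster.shutdown()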
Nov 29 05:42:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:16 compute-0 ceph-mon[75176]: pgmap v1161: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:19 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 05:42:19 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:42:19 compute-0 ceph-mon[75176]: pgmap v1162: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:20 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 05:42:20 compute-0 sshd-session[271866]: Invalid user zmarin from 45.120.216.232 port 59258
Nov 29 05:42:20 compute-0 sshd-session[271866]: Received disconnect from 45.120.216.232 port 59258:11: Bye Bye [preauth]
Nov 29 05:42:20 compute-0 sshd-session[271866]: Disconnected from invalid user zmarin 45.120.216.232 port 59258 [preauth]
Nov 29 05:42:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:21 compute-0 ceph-mon[75176]: pgmap v1163: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:42:22 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 05:42:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 05:42:22 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 05:42:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:42:22 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:42:23 compute-0 ceph-mon[75176]: pgmap v1164: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:23 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:42:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:24 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 05:42:25 compute-0 ceph-mon[75176]: pgmap v1165: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:42:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.7 KiB/s wr, 0 op/s
Nov 29 05:42:26 compute-0 ceph-mon[75176]: pgmap v1166: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.7 KiB/s wr, 0 op/s
Nov 29 05:42:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:42:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 05:42:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4d6476ad-1951-44f5-839b-0b3b554d9116/.meta.tmp'
Nov 29 05:42:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4d6476ad-1951-44f5-839b-0b3b554d9116/.meta.tmp' to config b'/volumes/_nogroup/4d6476ad-1951-44f5-839b-0b3b554d9116/.meta'
Nov 29 05:42:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 05:42:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "format": "json"}]: dispatch
Nov 29 05:42:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 05:42:26 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
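The _cmd_fs_subvolume_create / _cmd_fs_subvolume_getpath pair above is the mgr volumes module servicing what a Manila-style CephFS client issues when provisioning a share: a 1 GiB, namespace-isolated subvolume followed by a path lookup. The same two steps through the ceph CLI, sketched from Python with the values taken from the log:

    import subprocess

    sub = "4d6476ad-1951-44f5-839b-0b3b554d9116"  # sub_name from the log

    # fs subvolume create: 1 GiB quota, isolated RADOS namespace, mode 0755.
    subprocess.run(["ceph", "fs", "subvolume", "create", "cephfs", sub,
                    "--size", "1073741824", "--namespace-isolated",
                    "--mode", "0755"], check=True)

    # getpath returns the mount path, e.g. /volumes/_nogroup/<sub>/<uuid>.
    path = subprocess.run(
        ["ceph", "fs", "subvolume", "getpath", "cephfs", sub],
        capture_output=True, text=True, check=True).stdout.strip()
    print(path)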
Nov 29 05:42:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:42:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:42:27 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:42:27 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "format": "json"}]: dispatch
Nov 29 05:42:27 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:42:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.7 KiB/s wr, 0 op/s
Nov 29 05:42:28 compute-0 ceph-mon[75176]: pgmap v1167: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 5.7 KiB/s wr, 0 op/s
Nov 29 05:42:29 compute-0 podman[271868]: 2025-11-29 05:42:29.042728741 +0000 UTC m=+0.084920797 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 05:42:29 compute-0 sudo[271888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:42:29 compute-0 sudo[271888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:29 compute-0 sudo[271888]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:29 compute-0 sudo[271913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:42:29 compute-0 sudo[271913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:29 compute-0 sudo[271913]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:29 compute-0 sudo[271938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:42:29 compute-0 sudo[271938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:29 compute-0 sudo[271938]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:30 compute-0 sudo[271963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:42:30 compute-0 sudo[271963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 05:42:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 29 05:42:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 05:42:30 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
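The resize above raises the subvolume's quota from 1 GiB to 2 GiB; new_size is the absolute size in bytes, not a delta. CLI equivalent, sketched with the log's values:

    import subprocess

    # fs subvolume resize takes the new absolute size in bytes.
    subprocess.run(["ceph", "fs", "subvolume", "resize", "cephfs",
                    "4d6476ad-1951-44f5-839b-0b3b554d9116",
                    "2147483648"], check=True)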
Nov 29 05:42:30 compute-0 sudo[271963]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:42:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:42:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:42:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:42:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:42:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:42:30 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 98b4dba3-f3de-4000-807e-1d794b2848c4 does not exist
Nov 29 05:42:30 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 57f9961f-c742-4a5d-9361-45f1f9e0fabc does not exist
Nov 29 05:42:30 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 19791311-c2a5-4edd-8093-d62722ec746e does not exist
Nov 29 05:42:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:42:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:42:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:42:30 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:42:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:42:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:42:30 compute-0 sudo[272020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:42:30 compute-0 sudo[272020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:30 compute-0 sudo[272020]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:30 compute-0 sudo[272045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:42:30 compute-0 sudo[272045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:30 compute-0 sudo[272045]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:30 compute-0 sudo[272070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:42:30 compute-0 sudo[272070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:30 compute-0 sudo[272070]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:30 compute-0 sudo[272095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:42:30 compute-0 sudo[272095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:31 compute-0 ceph-mon[75176]: pgmap v1168: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 05:42:31 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "new_size": 2147483648, "format": "json"}]: dispatch
Nov 29 05:42:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:42:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:42:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:42:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:42:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:42:31 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:42:31 compute-0 podman[272161]: 2025-11-29 05:42:31.345078072 +0000 UTC m=+0.048487610 container create 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:42:31 compute-0 systemd[1]: Started libpod-conmon-02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77.scope.
Nov 29 05:42:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:42:31 compute-0 podman[272161]: 2025-11-29 05:42:31.321686378 +0000 UTC m=+0.025095926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:42:31 compute-0 podman[272161]: 2025-11-29 05:42:31.422617291 +0000 UTC m=+0.126026809 container init 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:42:31 compute-0 podman[272161]: 2025-11-29 05:42:31.431374141 +0000 UTC m=+0.134783629 container start 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 05:42:31 compute-0 podman[272161]: 2025-11-29 05:42:31.434530617 +0000 UTC m=+0.137940135 container attach 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:42:31 compute-0 lucid_fermi[272177]: 167 167
Nov 29 05:42:31 compute-0 systemd[1]: libpod-02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77.scope: Deactivated successfully.
Nov 29 05:42:31 compute-0 podman[272161]: 2025-11-29 05:42:31.441233569 +0000 UTC m=+0.144643107 container died 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 05:42:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c84975bf8393862dae94e172a3535358cf593a08818aeeb866fb749f168563c8-merged.mount: Deactivated successfully.
Nov 29 05:42:31 compute-0 podman[272161]: 2025-11-29 05:42:31.491452889 +0000 UTC m=+0.194862387 container remove 02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_fermi, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:42:31 compute-0 systemd[1]: libpod-conmon-02279c7429821772e4c1da99f35fc8a28a5c2b07e728ae145e8cc25e09678c77.scope: Deactivated successfully.
Nov 29 05:42:31 compute-0 podman[272200]: 2025-11-29 05:42:31.664377808 +0000 UTC m=+0.042053415 container create c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:42:31 compute-0 systemd[1]: Started libpod-conmon-c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0.scope.
Nov 29 05:42:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:31 compute-0 podman[272200]: 2025-11-29 05:42:31.646588519 +0000 UTC m=+0.024264146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:42:31 compute-0 podman[272200]: 2025-11-29 05:42:31.748976866 +0000 UTC m=+0.126652493 container init c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:42:31 compute-0 podman[272200]: 2025-11-29 05:42:31.755553365 +0000 UTC m=+0.133228972 container start c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:42:31 compute-0 podman[272200]: 2025-11-29 05:42:31.758759262 +0000 UTC m=+0.136434869 container attach c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:42:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 05:42:32 compute-0 beautiful_booth[272216]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:42:32 compute-0 beautiful_booth[272216]: --> relative data size: 1.0
Nov 29 05:42:32 compute-0 beautiful_booth[272216]: --> All data devices are unavailable
Nov 29 05:42:32 compute-0 systemd[1]: libpod-c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0.scope: Deactivated successfully.
Nov 29 05:42:32 compute-0 podman[272200]: 2025-11-29 05:42:32.756032148 +0000 UTC m=+1.133707755 container died c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:42:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-082cef2138f6e358b836a6ef3b1f787b769876b1cc53ffa90f4d45b4841c9246-merged.mount: Deactivated successfully.
Nov 29 05:42:32 compute-0 podman[272200]: 2025-11-29 05:42:32.808457692 +0000 UTC m=+1.186133299 container remove c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 05:42:32 compute-0 systemd[1]: libpod-conmon-c04fdf3504b568b14302bd58c19257f64f3e2848fe2d82d4da4db7e2da0325e0.scope: Deactivated successfully.
Nov 29 05:42:32 compute-0 sudo[272095]: pam_unix(sudo:session): session closed for user root
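The cephadm ceph-volume run that just closed ("lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 ...") reported "All data devices are unavailable" because all three LVs already carry OSDs, so nothing new was created; cephadm's next step, visible below, is to inventory the existing OSDs with "ceph-volume lvm list --format json". A sketch of parsing that inventory into an osd_id -> device map, assuming the same JSON shape that follows:

    import json
    import subprocess

    # ceph-volume prints a JSON object keyed by OSD id, as in the log below.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout

    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")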
Nov 29 05:42:32 compute-0 sudo[272259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:42:32 compute-0 sudo[272259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:32 compute-0 sudo[272259]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:32 compute-0 sudo[272284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:42:32 compute-0 sudo[272284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:32 compute-0 sudo[272284]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:32 compute-0 sudo[272309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:42:33 compute-0 sudo[272309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:33 compute-0 sudo[272309]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:33 compute-0 sudo[272334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:42:33 compute-0 sudo[272334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:33 compute-0 ceph-mon[75176]: pgmap v1169: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 05:42:33 compute-0 podman[272402]: 2025-11-29 05:42:33.338954279 +0000 UTC m=+0.037788872 container create 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:42:33 compute-0 systemd[1]: Started libpod-conmon-307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb.scope.
Nov 29 05:42:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:42:33 compute-0 podman[272402]: 2025-11-29 05:42:33.414644123 +0000 UTC m=+0.113478816 container init 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:42:33 compute-0 podman[272402]: 2025-11-29 05:42:33.321641921 +0000 UTC m=+0.020476524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:42:33 compute-0 podman[272402]: 2025-11-29 05:42:33.420924064 +0000 UTC m=+0.119758667 container start 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 05:42:33 compute-0 podman[272402]: 2025-11-29 05:42:33.427592825 +0000 UTC m=+0.126427438 container attach 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:42:33 compute-0 vigilant_lederberg[272418]: 167 167
Nov 29 05:42:33 compute-0 systemd[1]: libpod-307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb.scope: Deactivated successfully.
Nov 29 05:42:33 compute-0 podman[272402]: 2025-11-29 05:42:33.430050174 +0000 UTC m=+0.128884777 container died 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7e4c9e0de5ab84ad49616b98bfbf65a565e3b51793149d219e188e0cdda53bb-merged.mount: Deactivated successfully.
Nov 29 05:42:33 compute-0 podman[272402]: 2025-11-29 05:42:33.462578438 +0000 UTC m=+0.161413031 container remove 307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:42:33 compute-0 systemd[1]: libpod-conmon-307d18e12b88cd95ac90b69ab11cf5b1fc7c646bc69abcf67dd9e5fe93de5dfb.scope: Deactivated successfully.
Nov 29 05:42:33 compute-0 podman[272442]: 2025-11-29 05:42:33.649307568 +0000 UTC m=+0.034364809 container create a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:42:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "format": "json"}]: dispatch
Nov 29 05:42:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4d6476ad-1951-44f5-839b-0b3b554d9116, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:42:33 compute-0 systemd[1]: Started libpod-conmon-a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821.scope.
Nov 29 05:42:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4d6476ad-1951-44f5-839b-0b3b554d9116, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:42:33 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d6476ad-1951-44f5-839b-0b3b554d9116' of type subvolume
Nov 29 05:42:33 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:42:33.686+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d6476ad-1951-44f5-839b-0b3b554d9116' of type subvolume
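The (95) Operation not supported reply above is expected rather than a failure: "fs clone status" is only valid for subvolumes that were created as clones, and 4d6476ad-... was created directly, so the caller (likely the CephFS driver checking whether a clone has finished) gets EOPNOTSUPP and, as the next lines show, proceeds straight to deletion. A sketch of probing clone status and tolerating that reply:

    import subprocess

    probe = subprocess.run(
        ["ceph", "fs", "clone", "status", "cephfs",
         "4d6476ad-1951-44f5-839b-0b3b554d9116"],
        capture_output=True, text=True)

    if probe.returncode != 0 and "not allowed on subvolume" in probe.stderr:
        # Plain subvolume, not a clone: nothing to wait for before rm.
        print("plain subvolume, skipping clone wait")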
Nov 29 05:42:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 05:42:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4d6476ad-1951-44f5-839b-0b3b554d9116'' moved to trashcan
Nov 29 05:42:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:42:33 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d6476ad-1951-44f5-839b-0b3b554d9116, vol_name:cephfs) < ""
Nov 29 05:42:33 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982b4caf8fab403ccd3d0526da11c11ea24b9465fc1c75fc619effd7fb550c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982b4caf8fab403ccd3d0526da11c11ea24b9465fc1c75fc619effd7fb550c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982b4caf8fab403ccd3d0526da11c11ea24b9465fc1c75fc619effd7fb550c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982b4caf8fab403ccd3d0526da11c11ea24b9465fc1c75fc619effd7fb550c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:33 compute-0 podman[272442]: 2025-11-29 05:42:33.728014085 +0000 UTC m=+0.113071326 container init a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:42:33 compute-0 podman[272442]: 2025-11-29 05:42:33.634313637 +0000 UTC m=+0.019370898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:42:33 compute-0 podman[272442]: 2025-11-29 05:42:33.7344286 +0000 UTC m=+0.119485841 container start a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:42:33 compute-0 podman[272442]: 2025-11-29 05:42:33.737288279 +0000 UTC m=+0.122345520 container attach a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:42:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 05:42:34 compute-0 epic_rhodes[272458]: {
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:     "0": [
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:         {
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "devices": [
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "/dev/loop3"
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             ],
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_name": "ceph_lv0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_size": "21470642176",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "name": "ceph_lv0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "tags": {
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.cluster_name": "ceph",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.crush_device_class": "",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.encrypted": "0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.osd_id": "0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.type": "block",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.vdo": "0"
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             },
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "type": "block",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "vg_name": "ceph_vg0"
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:         }
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:     ],
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:     "1": [
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:         {
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "devices": [
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "/dev/loop4"
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             ],
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_name": "ceph_lv1",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_size": "21470642176",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "name": "ceph_lv1",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "tags": {
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.cluster_name": "ceph",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.crush_device_class": "",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.encrypted": "0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.osd_id": "1",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.type": "block",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.vdo": "0"
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             },
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "type": "block",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "vg_name": "ceph_vg1"
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:         }
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:     ],
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:     "2": [
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:         {
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "devices": [
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "/dev/loop5"
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             ],
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_name": "ceph_lv2",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_size": "21470642176",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "name": "ceph_lv2",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "tags": {
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.cluster_name": "ceph",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.crush_device_class": "",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.encrypted": "0",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.osd_id": "2",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.type": "block",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:                 "ceph.vdo": "0"
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             },
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "type": "block",
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:             "vg_name": "ceph_vg2"
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:         }
Nov 29 05:42:34 compute-0 epic_rhodes[272458]:     ]
Nov 29 05:42:34 compute-0 epic_rhodes[272458]: }
Nov 29 05:42:34 compute-0 systemd[1]: libpod-a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821.scope: Deactivated successfully.
Nov 29 05:42:34 compute-0 podman[272442]: 2025-11-29 05:42:34.448740446 +0000 UTC m=+0.833797697 container died a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:42:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-982b4caf8fab403ccd3d0526da11c11ea24b9465fc1c75fc619effd7fb550c51-merged.mount: Deactivated successfully.
Nov 29 05:42:34 compute-0 podman[272442]: 2025-11-29 05:42:34.502440971 +0000 UTC m=+0.887498212 container remove a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rhodes, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:42:34 compute-0 systemd[1]: libpod-conmon-a2a89702e827d833b1534bdff4eb7e1824cd4ef666074e1da21a74bb60a6d821.scope: Deactivated successfully.
Nov 29 05:42:34 compute-0 sudo[272334]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:34 compute-0 sudo[272501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:42:34 compute-0 sudo[272501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:34 compute-0 sudo[272501]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:34 compute-0 podman[272468]: 2025-11-29 05:42:34.616857387 +0000 UTC m=+0.137978985 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:42:34 compute-0 sudo[272530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:42:34 compute-0 sudo[272530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:34 compute-0 sudo[272530]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:34 compute-0 sudo[272556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:42:34 compute-0 sudo[272556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:34 compute-0 sudo[272556]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:34 compute-0 sudo[272581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:42:34 compute-0 sudo[272581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:35 compute-0 podman[272647]: 2025-11-29 05:42:35.07583057 +0000 UTC m=+0.040459467 container create 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:42:35 compute-0 systemd[1]: Started libpod-conmon-88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763.scope.
Nov 29 05:42:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:42:35 compute-0 podman[272647]: 2025-11-29 05:42:35.148119622 +0000 UTC m=+0.112748549 container init 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:42:35 compute-0 podman[272647]: 2025-11-29 05:42:35.056105894 +0000 UTC m=+0.020734841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:42:35 compute-0 podman[272647]: 2025-11-29 05:42:35.154543057 +0000 UTC m=+0.119171994 container start 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:42:35 compute-0 podman[272647]: 2025-11-29 05:42:35.158163724 +0000 UTC m=+0.122792641 container attach 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:42:35 compute-0 reverent_driscoll[272663]: 167 167
Nov 29 05:42:35 compute-0 podman[272647]: 2025-11-29 05:42:35.159923506 +0000 UTC m=+0.124552403 container died 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:42:35 compute-0 systemd[1]: libpod-88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763.scope: Deactivated successfully.
Nov 29 05:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-355964eda43185c0df3277e66a3c4e5ae455f47ecddfe8ef9269a13a08ad6541-merged.mount: Deactivated successfully.
Nov 29 05:42:35 compute-0 podman[272647]: 2025-11-29 05:42:35.189697534 +0000 UTC m=+0.154326431 container remove 88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:42:35 compute-0 systemd[1]: libpod-conmon-88897adb2c935a18514845f20628847ff94c1cbb3d9dcbd5bd169c0dfc6f0763.scope: Deactivated successfully.
Nov 29 05:42:35 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "format": "json"}]: dispatch
Nov 29 05:42:35 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4d6476ad-1951-44f5-839b-0b3b554d9116", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:35 compute-0 ceph-mon[75176]: pgmap v1170: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 17 KiB/s wr, 1 op/s
Nov 29 05:42:35 compute-0 podman[272685]: 2025-11-29 05:42:35.362912529 +0000 UTC m=+0.036653464 container create 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:42:35 compute-0 systemd[1]: Started libpod-conmon-900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9.scope.
Nov 29 05:42:35 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a6d84bb135c275d801fce9632a031fb83d6843f112ef0b30cba6f25d562a7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a6d84bb135c275d801fce9632a031fb83d6843f112ef0b30cba6f25d562a7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a6d84bb135c275d801fce9632a031fb83d6843f112ef0b30cba6f25d562a7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a6d84bb135c275d801fce9632a031fb83d6843f112ef0b30cba6f25d562a7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:42:35 compute-0 podman[272685]: 2025-11-29 05:42:35.429158535 +0000 UTC m=+0.102899500 container init 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:42:35 compute-0 podman[272685]: 2025-11-29 05:42:35.438100361 +0000 UTC m=+0.111841306 container start 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:42:35 compute-0 podman[272685]: 2025-11-29 05:42:35.440905109 +0000 UTC m=+0.114646054 container attach 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 29 05:42:35 compute-0 podman[272685]: 2025-11-29 05:42:35.347956108 +0000 UTC m=+0.021697073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:42:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:35 compute-0 nova_compute[254898]: 2025-11-29 05:42:35.961 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:42:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 44 KiB/s wr, 2 op/s
Nov 29 05:42:36 compute-0 elastic_babbage[272701]: {
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "osd_id": 0,
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "type": "bluestore"
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:     },
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "osd_id": 1,
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "type": "bluestore"
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:     },
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "osd_id": 2,
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:         "type": "bluestore"
Nov 29 05:42:36 compute-0 elastic_babbage[272701]:     }
Nov 29 05:42:36 compute-0 elastic_babbage[272701]: }
Nov 29 05:42:36 compute-0 systemd[1]: libpod-900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9.scope: Deactivated successfully.
Nov 29 05:42:36 compute-0 podman[272734]: 2025-11-29 05:42:36.360514603 +0000 UTC m=+0.021469289 container died 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 05:42:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-20a6d84bb135c275d801fce9632a031fb83d6843f112ef0b30cba6f25d562a7d-merged.mount: Deactivated successfully.
Nov 29 05:42:36 compute-0 podman[272734]: 2025-11-29 05:42:36.409007171 +0000 UTC m=+0.069961837 container remove 900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:42:36 compute-0 systemd[1]: libpod-conmon-900f2b9f7df7e65e378f4db22b77c376588e579d415cbde290c2e045805d22b9.scope: Deactivated successfully.
Nov 29 05:42:36 compute-0 sudo[272581]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:42:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:42:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:42:36 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:42:36 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 95e72e6f-30ba-423b-bad3-6ea9c7019ab8 does not exist
Nov 29 05:42:36 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a53babfb-a8c2-4810-bfa0-ffe1d4e68eb3 does not exist
Nov 29 05:42:36 compute-0 sudo[272749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:42:36 compute-0 sudo[272749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:36 compute-0 sudo[272749]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:36 compute-0 sudo[272774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:42:36 compute-0 sudo[272774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:42:36 compute-0 sudo[272774]: pam_unix(sudo:session): session closed for user root
Nov 29 05:42:36 compute-0 ceph-mon[75176]: pgmap v1171: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 44 KiB/s wr, 2 op/s
Nov 29 05:42:36 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:42:36 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:42:36 compute-0 nova_compute[254898]: 2025-11-29 05:42:36.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:42:37 compute-0 nova_compute[254898]: 2025-11-29 05:42:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:42:37 compute-0 nova_compute[254898]: 2025-11-29 05:42:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:42:37 compute-0 nova_compute[254898]: 2025-11-29 05:42:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:42:37 compute-0 nova_compute[254898]: 2025-11-29 05:42:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:42:37 compute-0 nova_compute[254898]: 2025-11-29 05:42:37.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:42:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s wr, 1 op/s
Nov 29 05:42:38 compute-0 ceph-mon[75176]: pgmap v1172: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s wr, 1 op/s
Nov 29 05:42:38 compute-0 nova_compute[254898]: 2025-11-29 05:42:38.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:42:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 2 op/s
Nov 29 05:42:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:40 compute-0 nova_compute[254898]: 2025-11-29 05:42:40.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:42:40 compute-0 nova_compute[254898]: 2025-11-29 05:42:40.978 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:42:40 compute-0 nova_compute[254898]: 2025-11-29 05:42:40.979 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:42:40 compute-0 nova_compute[254898]: 2025-11-29 05:42:40.979 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:42:40 compute-0 nova_compute[254898]: 2025-11-29 05:42:40.980 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:42:40 compute-0 nova_compute[254898]: 2025-11-29 05:42:40.980 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:42:41 compute-0 ceph-mon[75176]: pgmap v1173: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 2 op/s
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:42:41
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:42:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:42:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857270262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:42:41 compute-0 nova_compute[254898]: 2025-11-29 05:42:41.394 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f97ef850>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f96d7fa0>)]
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 05:42:41 compute-0 nova_compute[254898]: 2025-11-29 05:42:41.526 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:42:41 compute-0 nova_compute[254898]: 2025-11-29 05:42:41.527 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5036MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:42:41 compute-0 nova_compute[254898]: 2025-11-29 05:42:41.527 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:42:41 compute-0 nova_compute[254898]: 2025-11-29 05:42:41.528 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:42:41 compute-0 nova_compute[254898]: 2025-11-29 05:42:41.593 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:42:41 compute-0 nova_compute[254898]: 2025-11-29 05:42:41.593 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:42:41 compute-0 nova_compute[254898]: 2025-11-29 05:42:41.615 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/98efc0d9-c20a-4e7b-a016-a71069116a97/.meta.tmp'
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98efc0d9-c20a-4e7b-a016-a71069116a97/.meta.tmp' to config b'/volumes/_nogroup/98efc0d9-c20a-4e7b-a016-a71069116a97/.meta'
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "format": "json"}]: dispatch
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 05:42:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 05:42:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:42:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:42:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 05:42:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:42:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211080453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:42:42 compute-0 nova_compute[254898]: 2025-11-29 05:42:42.030 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:42:42 compute-0 nova_compute[254898]: 2025-11-29 05:42:42.035 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:42:42 compute-0 nova_compute[254898]: 2025-11-29 05:42:42.050 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:42:42 compute-0 nova_compute[254898]: 2025-11-29 05:42:42.051 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:42:42 compute-0 nova_compute[254898]: 2025-11-29 05:42:42.052 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:42:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 1 op/s
Nov 29 05:42:42 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:42:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 05:42:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp'
Nov 29 05:42:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp' to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta'
Nov 29 05:42:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 05:42:42 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "format": "json"}]: dispatch
Nov 29 05:42:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 05:42:42 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2857270262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:42:42 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:42:42 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1211080453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:42:42 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 05:42:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:42:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:42:43 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:42:43 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "format": "json"}]: dispatch
Nov 29 05:42:43 compute-0 ceph-mon[75176]: pgmap v1174: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 1 op/s
Nov 29 05:42:43 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:42:43 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "format": "json"}]: dispatch
Nov 29 05:42:43 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:42:43 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.csskcz(active, since 34m)
Nov 29 05:42:44 compute-0 nova_compute[254898]: 2025-11-29 05:42:44.050 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:42:44 compute-0 nova_compute[254898]: 2025-11-29 05:42:44.050 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:42:44 compute-0 nova_compute[254898]: 2025-11-29 05:42:44.050 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:42:44 compute-0 nova_compute[254898]: 2025-11-29 05:42:44.051 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:42:44 compute-0 nova_compute[254898]: 2025-11-29 05:42:44.093 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:42:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 1 op/s
Nov 29 05:42:44 compute-0 ceph-mon[75176]: mgrmap e18: compute-0.csskcz(active, since 34m)
Nov 29 05:42:44 compute-0 podman[272843]: 2025-11-29 05:42:44.99874577 +0000 UTC m=+0.052123127 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 05:42:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Nov 29 05:42:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 05:42:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 05:42:45 compute-0 ceph-mon[75176]: pgmap v1175: 305 pgs: 305 active+clean; 69 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 29 KiB/s wr, 1 op/s
Nov 29 05:42:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "snap_name": "05eea654-051b-4823-b7e8-43654092acb8", "format": "json"}]: dispatch
Nov 29 05:42:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:05eea654-051b-4823-b7e8-43654092acb8, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 05:42:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:05eea654-051b-4823-b7e8-43654092acb8, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
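
The resize and snapshot-create audit entries come from entity='client.openstack', i.e. the CephFS driver feeding JSON commands to the mgr volumes module. The same two calls can be issued from Python through the ceph CLI (a sketch: it assumes a local ceph binary and a readable client keyring; volume and snapshot IDs are copied from the log):

    import json
    import subprocess

    def ceph(*args):
        out = subprocess.run(["ceph", "--format", "json", *args],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out) if out.strip() else None

    # fs subvolume resize <vol> <sub> <new_size> --no_shrink
    ceph("fs", "subvolume", "resize", "cephfs",
         "98efc0d9-c20a-4e7b-a016-a71069116a97", "1073741824", "--no_shrink")

    # fs subvolume snapshot create <vol> <sub> <snap>
    ceph("fs", "subvolume", "snapshot", "create", "cephfs",
         "fb848b69-a318-4691-8a4b-5a72fc808dc6",
         "05eea654-051b-4823-b7e8-43654092acb8")
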
Nov 29 05:42:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 58 KiB/s wr, 3 op/s
Nov 29 05:42:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Nov 29 05:42:47 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "snap_name": "05eea654-051b-4823-b7e8-43654092acb8", "format": "json"}]: dispatch
Nov 29 05:42:47 compute-0 ceph-mon[75176]: pgmap v1176: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 58 KiB/s wr, 3 op/s
Nov 29 05:42:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 31 KiB/s wr, 2 op/s
Nov 29 05:42:48 compute-0 ceph-mon[75176]: pgmap v1177: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 31 KiB/s wr, 2 op/s
Nov 29 05:42:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "format": "json"}]: dispatch
Nov 29 05:42:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98efc0d9-c20a-4e7b-a016-a71069116a97, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:42:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98efc0d9-c20a-4e7b-a016-a71069116a97, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:42:48 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:42:48.621+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98efc0d9-c20a-4e7b-a016-a71069116a97' of type subvolume
Nov 29 05:42:48 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98efc0d9-c20a-4e7b-a016-a71069116a97' of type subvolume
Nov 29 05:42:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
Nov 29 05:42:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/98efc0d9-c20a-4e7b-a016-a71069116a97'' moved to trashcan
Nov 29 05:42:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:42:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98efc0d9-c20a-4e7b-a016-a71069116a97, vol_name:cephfs) < ""
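
The "(95) Operation not supported" replies are errno EOPNOTSUPP: 'fs clone status' is only defined for subvolumes of type clone, and these subvolumes were created directly, so the status poll that precedes 'fs subvolume rm' is answered with 95 rather than a status document. A caller can treat that errno as "not a clone" instead of a failure (a sketch; it assumes the ceph CLI's convention of exiting with the errno value):

    import errno
    import subprocess

    def is_pending_clone(volume, name):
        proc = subprocess.run(
            ["ceph", "fs", "clone", "status", volume, name, "--format", "json"],
            capture_output=True, text=True)
        if proc.returncode == errno.EOPNOTSUPP:  # 95, as in the mgr reply above
            return False                         # plain subvolume, never cloned
        proc.check_returncode()                  # surface any real failure
        return True
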
Nov 29 05:42:49 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "format": "json"}]: dispatch
Nov 29 05:42:49 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98efc0d9-c20a-4e7b-a016-a71069116a97", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 3 op/s
Nov 29 05:42:50 compute-0 ceph-mon[75176]: pgmap v1178: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 52 KiB/s wr, 3 op/s
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "snap_name": "05eea654-051b-4823-b7e8-43654092acb8_b2c2a3a9-6ca2-47e7-866b-066e22d44cab", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05eea654-051b-4823-b7e8-43654092acb8_b2c2a3a9-6ca2-47e7-866b-066e22d44cab, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp'
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp' to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta'
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05eea654-051b-4823-b7e8-43654092acb8_b2c2a3a9-6ca2-47e7-866b-066e22d44cab, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "snap_name": "05eea654-051b-4823-b7e8-43654092acb8", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05eea654-051b-4823-b7e8-43654092acb8, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp'
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta.tmp' to config b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6/.meta'
Nov 29 05:42:50 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:05eea654-051b-4823-b7e8-43654092acb8, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
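
Each "wrote ... .meta.tmp" / "Renamed ... to ... .meta" pair above is the volumes module rewriting subvolume metadata with the write-temp-then-rename idiom, so a crash mid-write can never leave a truncated .meta behind. The generic pattern (paths illustrative):

    import os

    def atomic_write(path, data: bytes):
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make the bytes durable first
        os.rename(tmp, path)      # then atomically swap the name (POSIX)
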
Nov 29 05:42:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:51 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "snap_name": "05eea654-051b-4823-b7e8-43654092acb8_b2c2a3a9-6ca2-47e7-866b-066e22d44cab", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:51 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "snap_name": "05eea654-051b-4823-b7e8-43654092acb8", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0004662470697401836 of space, bias 4.0, pg target 0.5594964836882204 quantized to 16 (current 16)
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:42:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
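
The pg_autoscaler arithmetic in this burst is reproducible from the lines themselves: pg target = (fraction of space used) x bias x (OSD count x mon_target_pg_per_osd), with 3 OSDs and the default of 100. For cephfs.cephfs.meta: 0.0004662470697401836 x 4.0 x 300 = 0.5594964836882203, exactly the logged "pg target". A sketch of the raw computation (the real module also honors pg_num_min/pg_num_max and only resizes once the value is far enough from the current pg count):

    OSDS = 3                 # "3 total, 3 up, 3 in" per the osdmap lines
    TARGET_PG_PER_OSD = 100  # mon_target_pg_per_osd default, assumed here

    def raw_pg_target(space_fraction, bias=1.0):
        # raw_pg_target(7.185749983720779e-06) -> 0.0021557249951162337 ('.mgr')
        # raw_pg_target(0.0004662470697401836, 4.0) -> 0.5594964836882203
        return space_fraction * bias * OSDS * TARGET_PG_PER_OSD

    def quantize(target, floor=1):
        # round up to a power of two, never below the floor; roughly what
        # "quantized to N" reports before the keep-current threshold applies
        n = floor
        while n < target:
            n *= 2
        return n
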
Nov 29 05:42:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:42:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 05:42:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d1e617bb-f4ed-4cc8-b966-2d95665d32f0/.meta.tmp'
Nov 29 05:42:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d1e617bb-f4ed-4cc8-b966-2d95665d32f0/.meta.tmp' to config b'/volumes/_nogroup/d1e617bb-f4ed-4cc8-b966-2d95665d32f0/.meta'
Nov 29 05:42:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 05:42:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "format": "json"}]: dispatch
Nov 29 05:42:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 05:42:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s wr, 2 op/s
Nov 29 05:42:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 05:42:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:42:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:42:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:42:52 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "format": "json"}]: dispatch
Nov 29 05:42:52 compute-0 ceph-mon[75176]: pgmap v1179: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s wr, 2 op/s
Nov 29 05:42:52 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
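
Two dispatch paths are interleaved here: the subvolume create/getpath commands from client.14351 are mgr commands handled by the volumes module, while "mon dump" arrives at the monitor from a fresh connection (client.? at 192.168.122.10, plausibly the driver discovering monitor addresses). Both paths are reachable from python-rados; a sketch, assuming /etc/ceph/ceph.conf and a client.openstack keyring on the caller:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()

    # mgr side: resolve the new subvolume's path, as done after create above
    ret, out, errs = cluster.mgr_command(json.dumps({
        "prefix": "fs subvolume getpath", "vol_name": "cephfs",
        "sub_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0",
        "format": "json"}), b"")

    # mon side: the same "mon dump" the monitor logs as handle_command
    ret, out, errs = cluster.mon_command(json.dumps({
        "prefix": "mon dump", "format": "json"}), b"")

    cluster.shutdown()
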
Nov 29 05:42:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 29 05:42:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 29 05:42:53 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 29 05:42:53 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "format": "json"}]: dispatch
Nov 29 05:42:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:42:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:42:53 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:42:53.808+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb848b69-a318-4691-8a4b-5a72fc808dc6' of type subvolume
Nov 29 05:42:53 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb848b69-a318-4691-8a4b-5a72fc808dc6' of type subvolume
Nov 29 05:42:53 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 05:42:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fb848b69-a318-4691-8a4b-5a72fc808dc6'' moved to trashcan
Nov 29 05:42:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:42:53 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb848b69-a318-4691-8a4b-5a72fc808dc6, vol_name:cephfs) < ""
Nov 29 05:42:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s wr, 3 op/s
Nov 29 05:42:54 compute-0 ceph-mon[75176]: osdmap e164: 3 total, 3 up, 3 in
Nov 29 05:42:54 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "format": "json"}]: dispatch
Nov 29 05:42:54 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb848b69-a318-4691-8a4b-5a72fc808dc6", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:54 compute-0 ceph-mon[75176]: pgmap v1181: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s wr, 3 op/s
Nov 29 05:42:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:42:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 75 KiB/s wr, 4 op/s
Nov 29 05:42:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "format": "json"}]: dispatch
Nov 29 05:42:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:42:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:42:56 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:42:56.866+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd1e617bb-f4ed-4cc8-b966-2d95665d32f0' of type subvolume
Nov 29 05:42:56 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd1e617bb-f4ed-4cc8-b966-2d95665d32f0' of type subvolume
Nov 29 05:42:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 05:42:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d1e617bb-f4ed-4cc8-b966-2d95665d32f0'' moved to trashcan
Nov 29 05:42:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:42:56 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d1e617bb-f4ed-4cc8-b966-2d95665d32f0, vol_name:cephfs) < ""
Nov 29 05:42:57 compute-0 ceph-mon[75176]: pgmap v1182: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 75 KiB/s wr, 4 op/s
Nov 29 05:42:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 75 KiB/s wr, 4 op/s
Nov 29 05:42:59 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "format": "json"}]: dispatch
Nov 29 05:42:59 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d1e617bb-f4ed-4cc8-b966-2d95665d32f0", "force": true, "format": "json"}]: dispatch
Nov 29 05:42:59 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:42:59.804 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:42:59 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:42:59.805 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 05:42:59 compute-0 podman[272862]: 2025-11-29 05:42:59.998690845 +0000 UTC m=+0.053441768 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 05:43:00 compute-0 ceph-mon[75176]: pgmap v1183: 305 pgs: 305 active+clean; 70 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 75 KiB/s wr, 4 op/s
Nov 29 05:43:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 84 KiB/s wr, 5 op/s
Nov 29 05:43:00 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 05:43:00 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:43:00 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:43:00 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:00 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 05:43:00 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a4fbeb19-4b4a-408e-8a0f-278794e0aaab'' moved to trashcan
Nov 29 05:43:00 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:43:00 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a4fbeb19-4b4a-408e-8a0f-278794e0aaab, vol_name:cephfs) < ""
Nov 29 05:43:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:00 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:43:00.806 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 05:43:01 compute-0 ceph-mon[75176]: pgmap v1184: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 84 KiB/s wr, 5 op/s
Nov 29 05:43:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 84 KiB/s wr, 5 op/s
Nov 29 05:43:02 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "format": "json"}]: dispatch
Nov 29 05:43:02 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a4fbeb19-4b4a-408e-8a0f-278794e0aaab", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 765 B/s rd, 79 KiB/s wr, 5 op/s
Nov 29 05:43:05 compute-0 podman[272883]: 2025-11-29 05:43:05.069448269 +0000 UTC m=+0.111335835 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 05:43:05 compute-0 ceph-mon[75176]: pgmap v1185: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 84 KiB/s wr, 5 op/s
Nov 29 05:43:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 29 05:43:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2_dff066e4-ae85-4050-9c93-143b245e669b", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2_dff066e4-ae85-4050-9c93-143b245e669b, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 05:43:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 29 05:43:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 81 KiB/s wr, 5 op/s
Nov 29 05:43:06 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 29 05:43:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp'
Nov 29 05:43:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp' to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta'
Nov 29 05:43:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2_dff066e4-ae85-4050-9c93-143b245e669b, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 05:43:07 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 05:43:07 compute-0 ceph-mon[75176]: pgmap v1186: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 765 B/s rd, 79 KiB/s wr, 5 op/s
Nov 29 05:43:07 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2_dff066e4-ae85-4050-9c93-143b245e669b", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:07 compute-0 ceph-mon[75176]: pgmap v1187: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 81 KiB/s wr, 5 op/s
Nov 29 05:43:07 compute-0 ceph-mon[75176]: osdmap e165: 3 total, 3 up, 3 in
Nov 29 05:43:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp'
Nov 29 05:43:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta.tmp' to config b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa/.meta'
Nov 29 05:43:07 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f919bca8-f41c-47b0-8fca-f8f7988969c2, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 05:43:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 2 op/s
Nov 29 05:43:08 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "snap_name": "f919bca8-f41c-47b0-8fca-f8f7988969c2", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:09 compute-0 ceph-mon[75176]: pgmap v1189: 305 pgs: 305 active+clean; 71 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 48 KiB/s wr, 2 op/s
Nov 29 05:43:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "format": "json"}]: dispatch
Nov 29 05:43:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dca14011-a433-40d4-8754-3eaafbae5faa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:43:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dca14011-a433-40d4-8754-3eaafbae5faa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:43:09 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:43:09.781+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dca14011-a433-40d4-8754-3eaafbae5faa' of type subvolume
Nov 29 05:43:09 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'dca14011-a433-40d4-8754-3eaafbae5faa' of type subvolume
Nov 29 05:43:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 05:43:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dca14011-a433-40d4-8754-3eaafbae5faa'' moved to trashcan
Nov 29 05:43:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:43:09 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dca14011-a433-40d4-8754-3eaafbae5faa, vol_name:cephfs) < ""
Nov 29 05:43:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 3 op/s
Nov 29 05:43:10 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "format": "json"}]: dispatch
Nov 29 05:43:10 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dca14011-a433-40d4-8754-3eaafbae5faa", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:10 compute-0 ceph-mon[75176]: pgmap v1190: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 3 op/s
Nov 29 05:43:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:43:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:43:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:43:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:43:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:43:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:43:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 3 op/s
Nov 29 05:43:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 29 05:43:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 29 05:43:13 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 29 05:43:13 compute-0 ceph-mon[75176]: pgmap v1191: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 46 KiB/s wr, 3 op/s
Nov 29 05:43:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:43:13.759 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:43:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:43:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:43:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:43:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
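
The acquire/acquired/released triplet above is oslo.concurrency's named in-process lock wrapped around ProcessMonitor._check_child_processes. The equivalent declaration (body illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # runs with the lock "_check_child_processes" held; the log shows
        # the acquire, the 0.001s wait, and the release for each pass
        pass

    _check_child_processes()
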
Nov 29 05:43:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 41 KiB/s wr, 3 op/s
Nov 29 05:43:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:43:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112252909' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:43:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:43:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3112252909' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:43:14 compute-0 ceph-mon[75176]: osdmap e166: 3 total, 3 up, 3 in
Nov 29 05:43:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3112252909' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:43:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3112252909' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:43:15 compute-0 ceph-mon[75176]: pgmap v1193: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 41 KiB/s wr, 3 op/s
Nov 29 05:43:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:16 compute-0 podman[272910]: 2025-11-29 05:43:16.039646497 +0000 UTC m=+0.083578586 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 05:43:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 526 B/s rd, 63 KiB/s wr, 4 op/s
Nov 29 05:43:16 compute-0 ceph-mon[75176]: pgmap v1194: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 526 B/s rd, 63 KiB/s wr, 4 op/s
Nov 29 05:43:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 4 op/s
Nov 29 05:43:19 compute-0 ceph-mon[75176]: pgmap v1195: 305 pgs: 305 active+clean; 71 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 61 KiB/s wr, 4 op/s
Nov 29 05:43:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Nov 29 05:43:20 compute-0 ceph-mon[75176]: pgmap v1196: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Nov 29 05:43:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 29 05:43:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 29 05:43:20 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 29 05:43:21 compute-0 ceph-mon[75176]: osdmap e167: 3 total, 3 up, 3 in
Nov 29 05:43:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 228 B/s rd, 46 KiB/s wr, 2 op/s
Nov 29 05:43:22 compute-0 ceph-mon[75176]: pgmap v1198: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 228 B/s rd, 46 KiB/s wr, 2 op/s
Nov 29 05:43:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Nov 29 05:43:25 compute-0 ceph-mon[75176]: pgmap v1199: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 41 KiB/s wr, 2 op/s
Nov 29 05:43:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 29 05:43:27 compute-0 ceph-mon[75176]: pgmap v1200: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 29 05:43:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 29 05:43:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:43:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:43:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp'
Nov 29 05:43:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp' to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta'
Nov 29 05:43:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:43:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "format": "json"}]: dispatch
Nov 29 05:43:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:43:28 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:43:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:43:28 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:43:29 compute-0 ceph-mon[75176]: pgmap v1201: 305 pgs: 305 active+clean; 72 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 0 op/s
Nov 29 05:43:29 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:43:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Nov 29 05:43:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:43:30 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "format": "json"}]: dispatch
Nov 29 05:43:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:31 compute-0 podman[272929]: 2025-11-29 05:43:31.002021477 +0000 UTC m=+0.055776924 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:43:31 compute-0 ceph-mon[75176]: pgmap v1202: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s wr, 0 op/s
Nov 29 05:43:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "snap_name": "521373cc-7b10-441e-9ad4-a9f2f13df341", "format": "json"}]: dispatch
Nov 29 05:43:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:43:31 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:43:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Nov 29 05:43:32 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "snap_name": "521373cc-7b10-441e-9ad4-a9f2f13df341", "format": "json"}]: dispatch
Nov 29 05:43:32 compute-0 ceph-mon[75176]: pgmap v1203: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s wr, 0 op/s
Nov 29 05:43:33 compute-0 sshd-session[272949]: Invalid user jenkins from 45.120.216.232 port 58150
Nov 29 05:43:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Nov 29 05:43:34 compute-0 sshd-session[272949]: Received disconnect from 45.120.216.232 port 58150:11: Bye Bye [preauth]
Nov 29 05:43:34 compute-0 sshd-session[272949]: Disconnected from invalid user jenkins 45.120.216.232 port 58150 [preauth]
Nov 29 05:43:35 compute-0 sshd-session[272951]: Invalid user nominatim from 152.32.145.111 port 33854
Nov 29 05:43:35 compute-0 podman[272953]: 2025-11-29 05:43:35.22837018 +0000 UTC m=+0.063060841 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 05:43:35 compute-0 sshd-session[272951]: Received disconnect from 152.32.145.111 port 33854:11: Bye Bye [preauth]
Nov 29 05:43:35 compute-0 sshd-session[272951]: Disconnected from invalid user nominatim 152.32.145.111 port 33854 [preauth]
Nov 29 05:43:35 compute-0 ceph-mon[75176]: pgmap v1204: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s wr, 0 op/s
Nov 29 05:43:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Nov 29 05:43:36 compute-0 sudo[272980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:43:36 compute-0 sudo[272980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:36 compute-0 sudo[272980]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:36 compute-0 sudo[273005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:43:36 compute-0 sudo[273005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:36 compute-0 sudo[273005]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:36 compute-0 ceph-mon[75176]: pgmap v1205: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Nov 29 05:43:36 compute-0 sudo[273030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:43:36 compute-0 sudo[273030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:36 compute-0 sudo[273030]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:36 compute-0 sudo[273055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:43:36 compute-0 sudo[273055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:37 compute-0 sudo[273055]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:43:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:43:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:43:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:43:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:43:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 5b11db75-1ad1-41c5-9f18-aca8b756710d does not exist
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ac786599-6ce3-4b0d-b976-b6d9a865a6cf does not exist
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev c08f1189-f624-4a11-85d7-47614fc4d6ef does not exist
Nov 29 05:43:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:43:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:43:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:43:37 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:43:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:43:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:43:37 compute-0 sudo[273111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:43:37 compute-0 sudo[273111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:37 compute-0 sudo[273111]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:37 compute-0 sudo[273136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:43:37 compute-0 sudo[273136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:37 compute-0 sudo[273136]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:37 compute-0 sudo[273161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:43:37 compute-0 sudo[273161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:37 compute-0 sudo[273161]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:37 compute-0 sudo[273186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:43:37 compute-0 sudo[273186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 05:43:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:43:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:43:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:43:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:43:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:43:37 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:43:37 compute-0 podman[273253]: 2025-11-29 05:43:37.803665029 +0000 UTC m=+0.046274157 container create a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a2d6206-4f8b-4475-a6b1-28b365cca976/.meta.tmp'
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a2d6206-4f8b-4475-a6b1-28b365cca976/.meta.tmp' to config b'/volumes/_nogroup/7a2d6206-4f8b-4475-a6b1-28b365cca976/.meta'
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "format": "json"}]: dispatch
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 05:43:37 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 05:43:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:43:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:43:37 compute-0 systemd[1]: Started libpod-conmon-a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4.scope.
Nov 29 05:43:37 compute-0 podman[273253]: 2025-11-29 05:43:37.782770885 +0000 UTC m=+0.025380103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:43:37 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:43:37 compute-0 podman[273253]: 2025-11-29 05:43:37.902926611 +0000 UTC m=+0.145535769 container init a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:43:37 compute-0 podman[273253]: 2025-11-29 05:43:37.911651531 +0000 UTC m=+0.154260669 container start a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:43:37 compute-0 podman[273253]: 2025-11-29 05:43:37.91489865 +0000 UTC m=+0.157507778 container attach a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:43:37 compute-0 heuristic_hamilton[273269]: 167 167
Nov 29 05:43:37 compute-0 systemd[1]: libpod-a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4.scope: Deactivated successfully.
Nov 29 05:43:37 compute-0 podman[273253]: 2025-11-29 05:43:37.919626174 +0000 UTC m=+0.162235302 container died a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:43:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a6a9898aee52f0f158b1dc6ded98f9833c88ec10adc41326f34214d63e96572-merged.mount: Deactivated successfully.
Nov 29 05:43:37 compute-0 nova_compute[254898]: 2025-11-29 05:43:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:37 compute-0 nova_compute[254898]: 2025-11-29 05:43:37.956 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:37 compute-0 podman[273253]: 2025-11-29 05:43:37.95681094 +0000 UTC m=+0.199420068 container remove a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hamilton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 05:43:37 compute-0 systemd[1]: libpod-conmon-a280d33ae527bd4e9532c871d8bbbaf740a1663319018243820288131e803fb4.scope: Deactivated successfully.
Nov 29 05:43:38 compute-0 podman[273293]: 2025-11-29 05:43:38.132719439 +0000 UTC m=+0.043259203 container create c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 05:43:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Nov 29 05:43:38 compute-0 systemd[1]: Started libpod-conmon-c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b.scope.
Nov 29 05:43:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:38 compute-0 podman[273293]: 2025-11-29 05:43:38.115490014 +0000 UTC m=+0.026029798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:43:38 compute-0 podman[273293]: 2025-11-29 05:43:38.210259149 +0000 UTC m=+0.120798963 container init c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:43:38 compute-0 podman[273293]: 2025-11-29 05:43:38.227640767 +0000 UTC m=+0.138180531 container start c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 05:43:38 compute-0 podman[273293]: 2025-11-29 05:43:38.230890275 +0000 UTC m=+0.141430029 container attach c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:43:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:43:38 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "format": "json"}]: dispatch
Nov 29 05:43:38 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:43:38 compute-0 ceph-mon[75176]: pgmap v1206: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s wr, 1 op/s
Nov 29 05:43:38 compute-0 nova_compute[254898]: 2025-11-29 05:43:38.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:38 compute-0 nova_compute[254898]: 2025-11-29 05:43:38.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:38 compute-0 nova_compute[254898]: 2025-11-29 05:43:38.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:38 compute-0 nova_compute[254898]: 2025-11-29 05:43:38.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:43:38 compute-0 nova_compute[254898]: 2025-11-29 05:43:38.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:38 compute-0 nova_compute[254898]: 2025-11-29 05:43:38.955 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 05:43:38 compute-0 nova_compute[254898]: 2025-11-29 05:43:38.970 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:39 compute-0 hardcore_bassi[273309]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:43:39 compute-0 hardcore_bassi[273309]: --> relative data size: 1.0
Nov 29 05:43:39 compute-0 hardcore_bassi[273309]: --> All data devices are unavailable
Nov 29 05:43:39 compute-0 systemd[1]: libpod-c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b.scope: Deactivated successfully.
Nov 29 05:43:39 compute-0 podman[273293]: 2025-11-29 05:43:39.275728228 +0000 UTC m=+1.186268022 container died c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-002eb492af3f55c4b457b66c19f0e344c476e7e9a8147dfa6ffc8a0d42d001a0-merged.mount: Deactivated successfully.
Nov 29 05:43:39 compute-0 podman[273293]: 2025-11-29 05:43:39.3347186 +0000 UTC m=+1.245258364 container remove c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bassi, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:43:39 compute-0 systemd[1]: libpod-conmon-c7903c8228c453eed97952e94fea0d95eaa4d8708223a31cca489bed109cf92b.scope: Deactivated successfully.
Nov 29 05:43:39 compute-0 sudo[273186]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:39 compute-0 sudo[273351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:43:39 compute-0 sudo[273351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:39 compute-0 sudo[273351]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:39 compute-0 sudo[273376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:43:39 compute-0 sudo[273376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:39 compute-0 sudo[273376]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:39 compute-0 sudo[273401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:43:39 compute-0 sudo[273401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:39 compute-0 sudo[273401]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:39 compute-0 sudo[273426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:43:39 compute-0 sudo[273426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:39 compute-0 nova_compute[254898]: 2025-11-29 05:43:39.990 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:40 compute-0 podman[273491]: 2025-11-29 05:43:40.035595782 +0000 UTC m=+0.038169911 container create 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:43:40 compute-0 systemd[1]: Started libpod-conmon-79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc.scope.
Nov 29 05:43:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:43:40 compute-0 podman[273491]: 2025-11-29 05:43:40.108328225 +0000 UTC m=+0.110902334 container init 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:43:40 compute-0 podman[273491]: 2025-11-29 05:43:40.017417853 +0000 UTC m=+0.019991962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:43:40 compute-0 podman[273491]: 2025-11-29 05:43:40.114621487 +0000 UTC m=+0.117195606 container start 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:43:40 compute-0 podman[273491]: 2025-11-29 05:43:40.11765422 +0000 UTC m=+0.120228379 container attach 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:43:40 compute-0 happy_keldysh[273507]: 167 167
Nov 29 05:43:40 compute-0 systemd[1]: libpod-79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc.scope: Deactivated successfully.
Nov 29 05:43:40 compute-0 podman[273491]: 2025-11-29 05:43:40.119543525 +0000 UTC m=+0.122117634 container died 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b545dc7f1fd1f8602b8cd8648f8518b3046cff54700882f5837176747cafb2c-merged.mount: Deactivated successfully.
Nov 29 05:43:40 compute-0 podman[273491]: 2025-11-29 05:43:40.15540145 +0000 UTC m=+0.157975559 container remove 79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keldysh, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 05:43:40 compute-0 systemd[1]: libpod-conmon-79a3ccb3c44b706f310ffc5a43fcc07d5301856ca38a617dbbf2594a573bd3cc.scope: Deactivated successfully.
Nov 29 05:43:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 2 op/s
Nov 29 05:43:40 compute-0 podman[273531]: 2025-11-29 05:43:40.298430977 +0000 UTC m=+0.037767682 container create 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 05:43:40 compute-0 systemd[1]: Started libpod-conmon-246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2.scope.
Nov 29 05:43:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca905bf47cbe4acd8085d99c2bd211d210a41ca2a6803a8b070b41df422b720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca905bf47cbe4acd8085d99c2bd211d210a41ca2a6803a8b070b41df422b720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca905bf47cbe4acd8085d99c2bd211d210a41ca2a6803a8b070b41df422b720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca905bf47cbe4acd8085d99c2bd211d210a41ca2a6803a8b070b41df422b720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:40 compute-0 podman[273531]: 2025-11-29 05:43:40.366557619 +0000 UTC m=+0.105894334 container init 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 05:43:40 compute-0 podman[273531]: 2025-11-29 05:43:40.374959471 +0000 UTC m=+0.114296176 container start 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:43:40 compute-0 podman[273531]: 2025-11-29 05:43:40.282410351 +0000 UTC m=+0.021747066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:43:40 compute-0 podman[273531]: 2025-11-29 05:43:40.378012965 +0000 UTC m=+0.117349670 container attach 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:43:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:40 compute-0 nova_compute[254898]: 2025-11-29 05:43:40.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:41 compute-0 confident_black[273546]: {
Nov 29 05:43:41 compute-0 confident_black[273546]:     "0": [
Nov 29 05:43:41 compute-0 confident_black[273546]:         {
Nov 29 05:43:41 compute-0 confident_black[273546]:             "devices": [
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "/dev/loop3"
Nov 29 05:43:41 compute-0 confident_black[273546]:             ],
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_name": "ceph_lv0",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_size": "21470642176",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "name": "ceph_lv0",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "tags": {
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.cluster_name": "ceph",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.crush_device_class": "",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.encrypted": "0",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.osd_id": "0",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.type": "block",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.vdo": "0"
Nov 29 05:43:41 compute-0 confident_black[273546]:             },
Nov 29 05:43:41 compute-0 confident_black[273546]:             "type": "block",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "vg_name": "ceph_vg0"
Nov 29 05:43:41 compute-0 confident_black[273546]:         }
Nov 29 05:43:41 compute-0 confident_black[273546]:     ],
Nov 29 05:43:41 compute-0 confident_black[273546]:     "1": [
Nov 29 05:43:41 compute-0 confident_black[273546]:         {
Nov 29 05:43:41 compute-0 confident_black[273546]:             "devices": [
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "/dev/loop4"
Nov 29 05:43:41 compute-0 confident_black[273546]:             ],
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_name": "ceph_lv1",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_size": "21470642176",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "name": "ceph_lv1",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "tags": {
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.cluster_name": "ceph",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.crush_device_class": "",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.encrypted": "0",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.osd_id": "1",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.type": "block",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.vdo": "0"
Nov 29 05:43:41 compute-0 confident_black[273546]:             },
Nov 29 05:43:41 compute-0 confident_black[273546]:             "type": "block",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "vg_name": "ceph_vg1"
Nov 29 05:43:41 compute-0 confident_black[273546]:         }
Nov 29 05:43:41 compute-0 confident_black[273546]:     ],
Nov 29 05:43:41 compute-0 confident_black[273546]:     "2": [
Nov 29 05:43:41 compute-0 confident_black[273546]:         {
Nov 29 05:43:41 compute-0 confident_black[273546]:             "devices": [
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "/dev/loop5"
Nov 29 05:43:41 compute-0 confident_black[273546]:             ],
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_name": "ceph_lv2",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_size": "21470642176",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "name": "ceph_lv2",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "tags": {
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.cluster_name": "ceph",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.crush_device_class": "",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.encrypted": "0",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.osd_id": "2",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.type": "block",
Nov 29 05:43:41 compute-0 confident_black[273546]:                 "ceph.vdo": "0"
Nov 29 05:43:41 compute-0 confident_black[273546]:             },
Nov 29 05:43:41 compute-0 confident_black[273546]:             "type": "block",
Nov 29 05:43:41 compute-0 confident_black[273546]:             "vg_name": "ceph_vg2"
Nov 29 05:43:41 compute-0 confident_black[273546]:         }
Nov 29 05:43:41 compute-0 confident_black[273546]:     ]
Nov 29 05:43:41 compute-0 confident_black[273546]: }
Nov 29 05:43:41 compute-0 systemd[1]: libpod-246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2.scope: Deactivated successfully.
Nov 29 05:43:41 compute-0 podman[273531]: 2025-11-29 05:43:41.161637361 +0000 UTC m=+0.900974066 container died 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 05:43:41 compute-0 ceph-mon[75176]: pgmap v1207: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s wr, 2 op/s
Nov 29 05:43:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ca905bf47cbe4acd8085d99c2bd211d210a41ca2a6803a8b070b41df422b720-merged.mount: Deactivated successfully.
Nov 29 05:43:41 compute-0 nova_compute[254898]: 2025-11-29 05:43:41.327 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:43:41 compute-0 nova_compute[254898]: 2025-11-29 05:43:41.328 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:43:41 compute-0 nova_compute[254898]: 2025-11-29 05:43:41.328 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
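
[editor's note] The three DEBUG lines above are oslo.concurrency's standard instrumentation around a named in-process lock: Acquiring, acquired (with wait time), released (with hold time), all emitted by the inner wrapper at lockutils.py:404/409/423. The pattern that produces them, using the real lockutils API (the function body here is a placeholder):

    from oslo_concurrency import lockutils

    # Wrapping a method in lockutils.synchronized("compute_resources")
    # yields exactly the Acquiring/acquired/released DEBUG triplet above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        ...  # runs while holding the "compute_resources" lock
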
Nov 29 05:43:41 compute-0 nova_compute[254898]: 2025-11-29 05:43:41.329 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:43:41 compute-0 nova_compute[254898]: 2025-11-29 05:43:41.329 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:43:41 compute-0 podman[273531]: 2025-11-29 05:43:41.353718131 +0000 UTC m=+1.093054876 container remove 246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:43:41 compute-0 systemd[1]: libpod-conmon-246259674fc9dfdedf3d430318cbcad4c2c0540c4ad4356570a2ae0c9fe527f2.scope: Deactivated successfully.
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:43:41
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'images', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr']
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:43:41 compute-0 sudo[273426]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "format": "json"}]: dispatch
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:43:41 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:43:41.462+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a2d6206-4f8b-4475-a6b1-28b365cca976' of type subvolume
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a2d6206-4f8b-4475-a6b1-28b365cca976' of type subvolume
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path '/volumes/_nogroup/7a2d6206-4f8b-4475-a6b1-28b365cca976' moved to trashcan
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a2d6206-4f8b-4475-a6b1-28b365cca976, vol_name:cephfs) < ""
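
[editor's note] The "(95) Operation not supported" replies above are the expected answer to this probe: "fs clone status" is only meaningful for subvolumes of type clone, and client.openstack is polling it on a plain subvolume before deleting it with "fs subvolume rm --force". A hedged sketch of the same probe through the ceph CLI; treating errno 95 as "not a clone" is an assumption about the caller, not something the log states.

    import json
    import subprocess

    def clone_status(volume, clone):
        # Mirrors the dispatched mgr command:
        #   {"prefix": "fs clone status", "vol_name": ..., "clone_name": ...}
        proc = subprocess.run(
            ["ceph", "fs", "clone", "status", volume, clone,
             "--format", "json"],
            capture_output=True, text=True,
        )
        if proc.returncode != 0:
            # mgr replies (95) Operation not supported for a non-clone.
            return None
        return json.loads(proc.stdout)

    print(clone_status("cephfs", "7a2d6206-4f8b-4475-a6b1-28b365cca976"))
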
Nov 29 05:43:41 compute-0 sudo[273570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:43:41 compute-0 sudo[273570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:41 compute-0 sudo[273570]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:41 compute-0 sudo[273614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:43:41 compute-0 sudo[273614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:41 compute-0 sudo[273614]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:43:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:43:41 compute-0 sudo[273639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:43:41 compute-0 sudo[273639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:41 compute-0 sudo[273639]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:41 compute-0 sudo[273664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:43:41 compute-0 sudo[273664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:43:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2163980884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:43:41 compute-0 nova_compute[254898]: 2025-11-29 05:43:41.768 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
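
[editor's note] nova-compute sizes its Ceph-backed disk inventory by shelling out to "ceph df" (the matching dispatch is visible on the mon side just above). Parsing that report is straightforward; the sketch below assumes the stock ceph df JSON layout, a top-level "pools" list whose entries carry a "stats" dict with bytes_used and max_avail.

    import json
    import subprocess

    report = json.loads(subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout)

    for pool in report["pools"]:
        stats = pool["stats"]
        # max_avail is what a consumer like nova can still budget against.
        print(pool["name"], stats["bytes_used"], stats["max_avail"])
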
Nov 29 05:43:41 compute-0 nova_compute[254898]: 2025-11-29 05:43:41.921 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:43:41 compute-0 nova_compute[254898]: 2025-11-29 05:43:41.922 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5027MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:43:41 compute-0 nova_compute[254898]: 2025-11-29 05:43:41.922 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:43:41 compute-0 nova_compute[254898]: 2025-11-29 05:43:41.923 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:43:42 compute-0 podman[273732]: 2025-11-29 05:43:42.013347089 +0000 UTC m=+0.038154980 container create 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:43:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:43:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:43:42 compute-0 systemd[1]: Started libpod-conmon-30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5.scope.
Nov 29 05:43:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:43:42 compute-0 podman[273732]: 2025-11-29 05:43:41.995660342 +0000 UTC m=+0.020468273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:43:42 compute-0 podman[273732]: 2025-11-29 05:43:42.095534159 +0000 UTC m=+0.120342090 container init 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:43:42 compute-0 podman[273732]: 2025-11-29 05:43:42.101569135 +0000 UTC m=+0.126377016 container start 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 05:43:42 compute-0 podman[273732]: 2025-11-29 05:43:42.104420934 +0000 UTC m=+0.129228825 container attach 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:43:42 compute-0 stupefied_ellis[273748]: 167 167
Nov 29 05:43:42 compute-0 systemd[1]: libpod-30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5.scope: Deactivated successfully.
Nov 29 05:43:42 compute-0 podman[273732]: 2025-11-29 05:43:42.106909814 +0000 UTC m=+0.131717765 container died 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 05:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-acc0fe91d45d1b7bf11b29ddfc950997cdccec4b387ba31008c3c159385357dd-merged.mount: Deactivated successfully.
Nov 29 05:43:42 compute-0 podman[273732]: 2025-11-29 05:43:42.149176453 +0000 UTC m=+0.173984374 container remove 30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.151 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.152 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:43:42 compute-0 systemd[1]: libpod-conmon-30980928f6ef733660faab9d4d0971e5535145a1fd732931baee267bd4b379c5.scope: Deactivated successfully.
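
[editor's note] stupefied_ellis shows the full podman lifecycle of one cephadm probe: image pull (a cache hit; note its timestamp precedes the create event it was logged after), create, init, start, attach, one line of payload output ("167 167", the ceph uid/gid), died, remove, and both scopes deactivating, all within roughly 150 ms. The same sequence can be watched live from podman's event stream; a sketch, assuming podman's JSON event format with Status/Name/Time fields.

    import json
    import subprocess

    # One JSON object per event; filter to the lifecycle edges of interest.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "event=create", "--filter", "event=died"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))
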
Nov 29 05:43:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 1 op/s
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.203 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing inventories for resource provider 59594bc8-0143-475b-913f-cbe106b48966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.277 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating ProviderTree inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.278 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
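
[editor's note] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class, so the dict above advertises 32 VCPUs, 7168 MB of RAM and about 53.1 GB of disk against 8 / 7680 / 59 physical. A one-liner to reproduce the arithmetic:

    inventory = {  # values copied from the ProviderTree update above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~53.1
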
Nov 29 05:43:42 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "format": "json"}]: dispatch
Nov 29 05:43:42 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a2d6206-4f8b-4475-a6b1-28b365cca976", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:42 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2163980884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:43:42 compute-0 ceph-mon[75176]: pgmap v1208: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 1 op/s
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.295 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing aggregate associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 05:43:42 compute-0 podman[273772]: 2025-11-29 05:43:42.326484115 +0000 UTC m=+0.036126611 container create 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.328 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing trait associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.348 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:43:42 compute-0 systemd[1]: Started libpod-conmon-575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01.scope.
Nov 29 05:43:42 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ca71c4d28cad47e6d63c6d84e274270027afe259910626b2e0f78f05649fd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ca71c4d28cad47e6d63c6d84e274270027afe259910626b2e0f78f05649fd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ca71c4d28cad47e6d63c6d84e274270027afe259910626b2e0f78f05649fd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ca71c4d28cad47e6d63c6d84e274270027afe259910626b2e0f78f05649fd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:43:42 compute-0 podman[273772]: 2025-11-29 05:43:42.311681809 +0000 UTC m=+0.021324325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:43:42 compute-0 podman[273772]: 2025-11-29 05:43:42.419794175 +0000 UTC m=+0.129436681 container init 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:43:42 compute-0 podman[273772]: 2025-11-29 05:43:42.432164093 +0000 UTC m=+0.141806589 container start 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 05:43:42 compute-0 podman[273772]: 2025-11-29 05:43:42.435600705 +0000 UTC m=+0.145243231 container attach 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:43:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:43:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1624111269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.785 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.791 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.811 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.813 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.813 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 05:43:42 compute-0 nova_compute[254898]: 2025-11-29 05:43:42.967 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 05:43:43 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1624111269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:43:43 compute-0 focused_saha[273790]: {
Nov 29 05:43:43 compute-0 focused_saha[273790]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "osd_id": 0,
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "type": "bluestore"
Nov 29 05:43:43 compute-0 focused_saha[273790]:     },
Nov 29 05:43:43 compute-0 focused_saha[273790]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "osd_id": 1,
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "type": "bluestore"
Nov 29 05:43:43 compute-0 focused_saha[273790]:     },
Nov 29 05:43:43 compute-0 focused_saha[273790]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "osd_id": 2,
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:43:43 compute-0 focused_saha[273790]:         "type": "bluestore"
Nov 29 05:43:43 compute-0 focused_saha[273790]:     }
Nov 29 05:43:43 compute-0 focused_saha[273790]: }
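
[editor's note] This second report, "ceph-volume raw list" (the cephadm call is the sudo COMMAND logged for pid 273664 above), keys by osd_uuid and resolves the device-mapper path per OSD, where the earlier lvm listing keyed by OSD id and carried the LVM tags. Joining the two views on osd_fsid/osd_uuid is a dictionary exercise; a sketch over the shapes visible in this log, with sample data reduced to OSD 2:

    # Shapes as printed above:
    #   lvm_report: {"<osd_id>": [{"tags": {"ceph.osd_fsid": ...}, ...}, ...]}
    #   raw_report: {"<osd_uuid>": {"device": ..., "osd_id": ..., "type": ...}}
    def join_reports(lvm_report, raw_report):
        lvm_uuids = {
            lv.get("tags", {}).get("ceph.osd_fsid")
            for lvs in lvm_report.values() for lv in lvs
        }
        for osd_uuid, raw in raw_report.items():
            origin = "lvm" if osd_uuid in lvm_uuids else "raw-only"
            print(raw["osd_id"], raw["device"], raw["type"], origin)

    join_reports(
        {"2": [{"tags": {"ceph.osd_fsid":
                         "eec69945-b157-41e1-8fba-3992c2dca958"}}]},
        {"eec69945-b157-41e1-8fba-3992c2dca958": {
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2, "type": "bluestore"}},
    )
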
Nov 29 05:43:43 compute-0 systemd[1]: libpod-575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01.scope: Deactivated successfully.
Nov 29 05:43:43 compute-0 conmon[273790]: conmon 575a3778ec15d7c0e45a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01.scope/container/memory.events
Nov 29 05:43:43 compute-0 podman[273772]: 2025-11-29 05:43:43.383871401 +0000 UTC m=+1.093513907 container died 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:43:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-62ca71c4d28cad47e6d63c6d84e274270027afe259910626b2e0f78f05649fd4-merged.mount: Deactivated successfully.
Nov 29 05:43:43 compute-0 podman[273772]: 2025-11-29 05:43:43.447289969 +0000 UTC m=+1.156932505 container remove 575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:43:43 compute-0 systemd[1]: libpod-conmon-575a3778ec15d7c0e45a133dafd386ae25e3484975489a8e168dcf43d3e74d01.scope: Deactivated successfully.
Nov 29 05:43:43 compute-0 sudo[273664]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:43:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:43:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:43:43 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:43:43 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 3eeaf550-96aa-4a98-baec-815f8e2584d0 does not exist
Nov 29 05:43:43 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 53b52b17-a785-4567-b98c-70f6c9cf4a31 does not exist
Nov 29 05:43:43 compute-0 sudo[273858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:43:43 compute-0 sudo[273858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:43 compute-0 sudo[273858]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:43 compute-0 sudo[273883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:43:43 compute-0 sudo[273883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:43:43 compute-0 sudo[273883]: pam_unix(sudo:session): session closed for user root
Nov 29 05:43:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 1 op/s
Nov 29 05:43:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:43:44 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:43:44 compute-0 ceph-mon[75176]: pgmap v1209: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s wr, 1 op/s
Nov 29 05:43:44 compute-0 nova_compute[254898]: 2025-11-29 05:43:44.961 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:44 compute-0 nova_compute[254898]: 2025-11-29 05:43:44.962 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:43:44 compute-0 nova_compute[254898]: 2025-11-29 05:43:44.962 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:43:44 compute-0 nova_compute[254898]: 2025-11-29 05:43:44.962 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:43:44 compute-0 nova_compute[254898]: 2025-11-29 05:43:44.983 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:43:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:43:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 05:43:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/19b69edf-a49a-4027-a0e5-36e1c4984bfa/.meta.tmp'
Nov 29 05:43:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/19b69edf-a49a-4027-a0e5-36e1c4984bfa/.meta.tmp' to config b'/volumes/_nogroup/19b69edf-a49a-4027-a0e5-36e1c4984bfa/.meta'
Nov 29 05:43:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 05:43:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "format": "json"}]: dispatch
Nov 29 05:43:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 05:43:45 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
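
[editor's note] The create -> getpath pair is the standard provisioning handshake: allocate a sized, namespace-isolated subvolume, then resolve the CephFS path it landed on (by convention /volumes/_nogroup/<name> for group-less subvolumes, as the trashcan lines show). The equivalent CLI calls, with flag spellings taken from the dispatched JSON arguments; passing the size as a named --size flag is an assumption based on the stock ceph CLI.

    import subprocess

    name = "19b69edf-a49a-4027-a0e5-36e1c4984bfa"
    subprocess.run(
        ["ceph", "fs", "subvolume", "create", "cephfs", name,
         "--size", str(2147483648),          # bytes, as dispatched above
         "--namespace-isolated", "--mode", "0755"],
        check=True,
    )
    path = subprocess.run(
        ["ceph", "fs", "subvolume", "getpath", "cephfs", name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(path)  # /volumes/_nogroup/ followed by the subvolume name
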
Nov 29 05:43:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:43:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:43:45 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:43:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 2 op/s
Nov 29 05:43:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:43:46 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "format": "json"}]: dispatch
Nov 29 05:43:46 compute-0 ceph-mon[75176]: pgmap v1210: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 49 KiB/s wr, 2 op/s
Nov 29 05:43:47 compute-0 podman[273908]: 2025-11-29 05:43:47.031341331 +0000 UTC m=+0.086525577 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
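
[editor's note] The health_status=healthy event above is podman's periodic run of the healthcheck declared in config_data (test '/openstack/healthcheck', bind-mounted at /openstack). The same test can be fired on demand; a sketch, with the container name taken from the event:

    import subprocess

    # "podman healthcheck run" executes the container's configured test
    # and exits 0 for healthy, non-zero for unhealthy.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
    ).returncode
    print("healthy" if rc == 0 else "unhealthy")
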
Nov 29 05:43:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 2 op/s
Nov 29 05:43:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "format": "json"}]: dispatch
Nov 29 05:43:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:43:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:43:48 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:43:48.958+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '19b69edf-a49a-4027-a0e5-36e1c4984bfa' of type subvolume
Nov 29 05:43:48 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '19b69edf-a49a-4027-a0e5-36e1c4984bfa' of type subvolume
Nov 29 05:43:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 05:43:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path '/volumes/_nogroup/19b69edf-a49a-4027-a0e5-36e1c4984bfa' moved to trashcan
Nov 29 05:43:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:43:48 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:19b69edf-a49a-4027-a0e5-36e1c4984bfa, vol_name:cephfs) < ""
Nov 29 05:43:49 compute-0 ceph-mon[75176]: pgmap v1211: 305 pgs: 305 active+clean; 72 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 40 KiB/s wr, 2 op/s
Nov 29 05:43:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 73 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 3 op/s
Nov 29 05:43:50 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "format": "json"}]: dispatch
Nov 29 05:43:50 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "19b69edf-a49a-4027-a0e5-36e1c4984bfa", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:51 compute-0 ceph-mon[75176]: pgmap v1212: 305 pgs: 305 active+clean; 73 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 76 KiB/s wr, 3 op/s
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005100610674285342 of space, bias 4.0, pg target 0.6120732809142411 quantized to 16 (current 16)
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:43:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
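
[editor's note] All eleven pg_autoscaler lines fit one formula: pg_target = capacity_ratio * bias * PG budget, where the budget num_osds * mon_target_pg_per_osd = 3 * 100 = 300 is inferred from the numbers rather than stated in the log, and the result is quantized to a power of two no lower than the pool's current pg_num. A check against three of the logged pools:

    pools = {  # name: (capacity_ratio, bias), values copied from the log
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.000665858301588852,  1.0),
        "cephfs.cephfs.meta": (0.0005100610674285342, 4.0),
    }
    PG_BUDGET = 3 * 100  # num_osds * mon_target_pg_per_osd (assumed defaults)
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # ~0.0021557, ~0.19976, ~0.61207: matching the "pg target" values above,
    # so every pool stays quantized at its current pg_num.
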
Nov 29 05:43:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 73 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s
Nov 29 05:43:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:43:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 05:43:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/16246e8b-77e5-4422-a8a4-1522b5502edf/.meta.tmp'
Nov 29 05:43:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/16246e8b-77e5-4422-a8a4-1522b5502edf/.meta.tmp' to config b'/volumes/_nogroup/16246e8b-77e5-4422-a8a4-1522b5502edf/.meta'
Nov 29 05:43:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 05:43:52 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "format": "json"}]: dispatch
Nov 29 05:43:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 05:43:52 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 05:43:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:43:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:43:53 compute-0 ceph-mon[75176]: pgmap v1213: 305 pgs: 305 active+clean; 73 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s
Nov 29 05:43:53 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:43:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 73 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s
Nov 29 05:43:54 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:43:54 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "format": "json"}]: dispatch
Nov 29 05:43:55 compute-0 ceph-mon[75176]: pgmap v1214: 305 pgs: 305 active+clean; 73 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s
Nov 29 05:43:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "format": "json"}]: dispatch
Nov 29 05:43:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:16246e8b-77e5-4422-a8a4-1522b5502edf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:43:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:16246e8b-77e5-4422-a8a4-1522b5502edf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:43:55 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:43:55.892+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '16246e8b-77e5-4422-a8a4-1522b5502edf' of type subvolume
Nov 29 05:43:55 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '16246e8b-77e5-4422-a8a4-1522b5502edf' of type subvolume
Nov 29 05:43:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
Nov 29 05:43:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/16246e8b-77e5-4422-a8a4-1522b5502edf'' moved to trashcan
Nov 29 05:43:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:43:55 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:16246e8b-77e5-4422-a8a4-1522b5502edf, vol_name:cephfs) < ""
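The cycle just completed (subvolume create, getpath, clone status, forced rm) repeats below with fresh UUIDs (fa3cb891-..., 479a8e74-...). The caller is only identified as client.openstack, but the command set is consistent with an OpenStack CephFS driver probing share creation. The EOPNOTSUPP (95) on 'fs clone status' is expected: that command only applies to subvolumes created as clones. The same cycle expressed with the ceph CLI these mgr commands map onto (client id, conf path, and the helper are assumptions, not from the log):

```python
# Sketch: replay the audited create -> getpath -> clone status -> rm cycle.
import subprocess
import uuid

def ceph(*args):
    return subprocess.run(
        ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
         "--format", "json", *args],
        capture_output=True, text=True)

sub = str(uuid.uuid4())
ceph("fs", "subvolume", "create", "cephfs", sub, "1073741824",
     "--namespace-isolated", "--mode", "0755")
path = ceph("fs", "subvolume", "getpath", "cephfs", sub).stdout.strip()
# On a plain subvolume this fails with error 95, matching the logged
# "operation 'clone-status' is not allowed ... of type subvolume":
ceph("fs", "clone", "status", "cephfs", sub)
ceph("fs", "subvolume", "rm", "cephfs", sub, "--force")
```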
Nov 29 05:43:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:43:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 68 KiB/s wr, 4 op/s
Nov 29 05:43:56 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "format": "json"}]: dispatch
Nov 29 05:43:56 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "16246e8b-77e5-4422-a8a4-1522b5502edf", "force": true, "format": "json"}]: dispatch
Nov 29 05:43:56 compute-0 ceph-mon[75176]: pgmap v1215: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 68 KiB/s wr, 4 op/s
Nov 29 05:43:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 54 KiB/s wr, 2 op/s
Nov 29 05:43:59 compute-0 ceph-mon[75176]: pgmap v1216: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 54 KiB/s wr, 2 op/s
Nov 29 05:43:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:43:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 05:43:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fa3cb891-f31e-45d1-aaa6-1610fdda8845/.meta.tmp'
Nov 29 05:43:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fa3cb891-f31e-45d1-aaa6-1610fdda8845/.meta.tmp' to config b'/volumes/_nogroup/fa3cb891-f31e-45d1-aaa6-1610fdda8845/.meta'
Nov 29 05:43:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 05:43:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "format": "json"}]: dispatch
Nov 29 05:43:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 05:43:59 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 05:43:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:43:59 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:44:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 4 op/s
Nov 29 05:44:00 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:44:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:44:01 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "format": "json"}]: dispatch
Nov 29 05:44:01 compute-0 ceph-mon[75176]: pgmap v1217: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 91 KiB/s wr, 4 op/s
Nov 29 05:44:02 compute-0 podman[273928]: 2025-11-29 05:44:02.018339113 +0000 UTC m=+0.068452129 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
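podman's health_status events, like the multipathd one above and the ovn_controller / ovn_metadata_agent ones below, serialize the entire container config into a single journal line. The fields worth watching sit at the front of the parenthesized list; a small parser, assuming the field order shown in this log holds:

```python
# Pull container id, image, name, health status, and failing streak out of
# a podman health_status journal line (field order as observed here; other
# podman versions may differ).
import re

HEALTH = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
    r"health_status=(?P<status>[^,]+), "
    r"health_failing_streak=(?P<streak>\d+)")

def parse_health(line):
    m = HEALTH.search(line)
    return m.groupdict() if m else None
```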
Nov 29 05:44:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Nov 29 05:44:03 compute-0 ceph-mon[75176]: pgmap v1218: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Nov 29 05:44:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "format": "json"}]: dispatch
Nov 29 05:44:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:44:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:44:03 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:44:03.371+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fa3cb891-f31e-45d1-aaa6-1610fdda8845' of type subvolume
Nov 29 05:44:03 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fa3cb891-f31e-45d1-aaa6-1610fdda8845' of type subvolume
Nov 29 05:44:03 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "force": true, "format": "json"}]: dispatch
Nov 29 05:44:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 05:44:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fa3cb891-f31e-45d1-aaa6-1610fdda8845'' moved to trashcan
Nov 29 05:44:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:44:03 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fa3cb891-f31e-45d1-aaa6-1610fdda8845, vol_name:cephfs) < ""
Nov 29 05:44:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Nov 29 05:44:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "format": "json"}]: dispatch
Nov 29 05:44:04 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fa3cb891-f31e-45d1-aaa6-1610fdda8845", "force": true, "format": "json"}]: dispatch
Nov 29 05:44:04 compute-0 ceph-mon[75176]: pgmap v1219: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Nov 29 05:44:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:06 compute-0 podman[273948]: 2025-11-29 05:44:06.072836041 +0000 UTC m=+0.116638215 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 05:44:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 74 KiB/s wr, 4 op/s
Nov 29 05:44:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:44:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 05:44:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/479a8e74-0da9-4e81-a8a6-b7eb56d43c48/.meta.tmp'
Nov 29 05:44:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/479a8e74-0da9-4e81-a8a6-b7eb56d43c48/.meta.tmp' to config b'/volumes/_nogroup/479a8e74-0da9-4e81-a8a6-b7eb56d43c48/.meta'
Nov 29 05:44:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 05:44:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "format": "json"}]: dispatch
Nov 29 05:44:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 05:44:06 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 05:44:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 05:44:06 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:44:07 compute-0 ceph-mon[75176]: pgmap v1220: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 74 KiB/s wr, 4 op/s
Nov 29 05:44:07 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/548060573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 05:44:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 56 KiB/s wr, 2 op/s
Nov 29 05:44:08 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 29 05:44:08 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "format": "json"}]: dispatch
Nov 29 05:44:09 compute-0 ceph-mon[75176]: pgmap v1221: 305 pgs: 305 active+clean; 73 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 56 KiB/s wr, 2 op/s
Nov 29 05:44:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 83 KiB/s wr, 4 op/s
Nov 29 05:44:10 compute-0 ceph-mon[75176]: pgmap v1222: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 83 KiB/s wr, 4 op/s
Nov 29 05:44:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "format": "json"}]: dispatch
Nov 29 05:44:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:44:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:44:10 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:44:10.705+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '479a8e74-0da9-4e81-a8a6-b7eb56d43c48' of type subvolume
Nov 29 05:44:10 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '479a8e74-0da9-4e81-a8a6-b7eb56d43c48' of type subvolume
Nov 29 05:44:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "force": true, "format": "json"}]: dispatch
Nov 29 05:44:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 05:44:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/479a8e74-0da9-4e81-a8a6-b7eb56d43c48'' moved to trashcan
Nov 29 05:44:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:44:10 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:479a8e74-0da9-4e81-a8a6-b7eb56d43c48, vol_name:cephfs) < ""
Nov 29 05:44:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:44:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:44:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "format": "json"}]: dispatch
Nov 29 05:44:11 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "479a8e74-0da9-4e81-a8a6-b7eb56d43c48", "force": true, "format": "json"}]: dispatch
Nov 29 05:44:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:44:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:44:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:44:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:44:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Nov 29 05:44:12 compute-0 ceph-mon[75176]: pgmap v1223: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Nov 29 05:44:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:44:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:44:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:44:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:44:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:44:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Nov 29 05:44:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:44:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/904247315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:44:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:44:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/904247315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:44:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/904247315' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:44:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/904247315' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "snap_name": "521373cc-7b10-441e-9ad4-a9f2f13df341_ffd2f7d1-d553-4c4c-ad2a-79c702a633bc", "force": true, "format": "json"}]: dispatch
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341_ffd2f7d1-d553-4c4c-ad2a-79c702a633bc, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp'
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp' to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta'
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341_ffd2f7d1-d553-4c4c-ad2a-79c702a633bc, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "snap_name": "521373cc-7b10-441e-9ad4-a9f2f13df341", "force": true, "format": "json"}]: dispatch
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp'
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta.tmp' to config b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27/.meta'
Nov 29 05:44:14 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:521373cc-7b10-441e-9ad4-a9f2f13df341, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
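The two snapshot removals above carry paired names, first <snapshot-uuid>_<clone-uuid> and then the bare <snapshot-uuid>, the naming OpenStack drivers use to track a snapshot plus its per-clone bookkeeping. The log only shows the JSON command bodies; the same call issued directly against the mgr with python3-rados would look roughly like this (conffile and client name are assumptions):

```python
# Sketch: send one of the logged mgr commands via librados.
import json
import rados

cmd = {"prefix": "fs subvolume snapshot rm",
       "vol_name": "cephfs",
       "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27",
       "snap_name": "521373cc-7b10-441e-9ad4-a9f2f13df341",
       "force": True, "format": "json"}

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    ret, outbuf, outs = cluster.mgr_command(json.dumps(cmd), b"")
finally:
    cluster.shutdown()
```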
Nov 29 05:44:15 compute-0 ceph-mon[75176]: pgmap v1224: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Nov 29 05:44:15 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "snap_name": "521373cc-7b10-441e-9ad4-a9f2f13df341_ffd2f7d1-d553-4c4c-ad2a-79c702a633bc", "force": true, "format": "json"}]: dispatch
Nov 29 05:44:15 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "snap_name": "521373cc-7b10-441e-9ad4-a9f2f13df341", "force": true, "format": "json"}]: dispatch
Nov 29 05:44:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 78 KiB/s wr, 4 op/s
Nov 29 05:44:16 compute-0 ceph-mon[75176]: pgmap v1225: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 78 KiB/s wr, 4 op/s
Nov 29 05:44:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "format": "json"}]: dispatch
Nov 29 05:44:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:44:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 29 05:44:17 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:44:17.932+0000 7fa4c75e5640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '212fdd6d-2482-42c2-82e5-a1ecfd70ce27' of type subvolume
Nov 29 05:44:17 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '212fdd6d-2482-42c2-82e5-a1ecfd70ce27' of type subvolume
Nov 29 05:44:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "force": true, "format": "json"}]: dispatch
Nov 29 05:44:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:44:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/212fdd6d-2482-42c2-82e5-a1ecfd70ce27'' moved to trashcan
Nov 29 05:44:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 29 05:44:17 compute-0 ceph-mgr[75473]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:212fdd6d-2482-42c2-82e5-a1ecfd70ce27, vol_name:cephfs) < ""
Nov 29 05:44:18 compute-0 podman[273974]: 2025-11-29 05:44:18.01916002 +0000 UTC m=+0.072937855 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 05:44:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 59 KiB/s wr, 3 op/s
Nov 29 05:44:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "format": "json"}]: dispatch
Nov 29 05:44:19 compute-0 ceph-mon[75176]: from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "212fdd6d-2482-42c2-82e5-a1ecfd70ce27", "force": true, "format": "json"}]: dispatch
Nov 29 05:44:19 compute-0 ceph-mon[75176]: pgmap v1226: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 59 KiB/s wr, 3 op/s
Nov 29 05:44:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 82 KiB/s wr, 4 op/s
Nov 29 05:44:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:21 compute-0 ceph-mon[75176]: pgmap v1227: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 82 KiB/s wr, 4 op/s
Nov 29 05:44:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Nov 29 05:44:22 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:44:22.627 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:44:22 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:44:22.629 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 05:44:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 29 05:44:23 compute-0 ceph-mon[75176]: pgmap v1228: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 55 KiB/s wr, 3 op/s
Nov 29 05:44:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 29 05:44:23 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 29 05:44:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 67 KiB/s wr, 3 op/s
Nov 29 05:44:24 compute-0 ceph-mon[75176]: osdmap e168: 3 total, 3 up, 3 in
Nov 29 05:44:24 compute-0 ceph-mon[75176]: pgmap v1230: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 67 KiB/s wr, 3 op/s
Nov 29 05:44:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 2 op/s
Nov 29 05:44:27 compute-0 ceph-mon[75176]: pgmap v1231: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 2 op/s
Nov 29 05:44:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 2 op/s
Nov 29 05:44:29 compute-0 ceph-mon[75176]: pgmap v1232: 305 pgs: 305 active+clean; 74 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 409 B/s rd, 37 KiB/s wr, 2 op/s
Nov 29 05:44:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 1 op/s
Nov 29 05:44:30 compute-0 ceph-mon[75176]: pgmap v1233: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 1 op/s
Nov 29 05:44:30 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:44:30.631 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
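This DbSetCommand is the write-back the agent announced at 05:44:22 ("Delaying updating chassis table for 8 seconds"): after the delay it records the processed nb_cfg (11) in Chassis_Private.external_ids. In ovsdbapp terms the transaction is roughly the following (a sketch; `api` stands for an already-connected southbound ovsdbapp backend, which the log never shows being built, and whether your ovsdbapp version exposes if_exists through db_set varies, though the logged command clearly carries it):

```python
# Approximate ovsdbapp equivalent of the logged transaction; `api` is an
# assumed, already-connected ovsdbapp Backend for the OVN southbound DB.
api.db_set(
    'Chassis_Private',
    '63cfe9d2-e938-418d-9401-5d1a600b4ede',
    ('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),
    if_exists=True,
).execute(check_error=True)
```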
Nov 29 05:44:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 29 05:44:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 29 05:44:30 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 29 05:44:31 compute-0 ceph-mon[75176]: osdmap e169: 3 total, 3 up, 3 in
Nov 29 05:44:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 229 B/s rd, 28 KiB/s wr, 1 op/s
Nov 29 05:44:32 compute-0 ceph-mon[75176]: pgmap v1235: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 229 B/s rd, 28 KiB/s wr, 1 op/s
Nov 29 05:44:33 compute-0 podman[273994]: 2025-11-29 05:44:33.015572153 +0000 UTC m=+0.074119164 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 05:44:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 1 op/s
Nov 29 05:44:35 compute-0 ceph-mon[75176]: pgmap v1236: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 25 KiB/s wr, 1 op/s
Nov 29 05:44:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Nov 29 05:44:37 compute-0 podman[274014]: 2025-11-29 05:44:37.058972819 +0000 UTC m=+0.100345118 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 29 05:44:37 compute-0 ceph-mon[75176]: pgmap v1237: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Nov 29 05:44:37 compute-0 nova_compute[254898]: 2025-11-29 05:44:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:44:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Nov 29 05:44:39 compute-0 ceph-mon[75176]: pgmap v1238: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Nov 29 05:44:39 compute-0 nova_compute[254898]: 2025-11-29 05:44:39.949 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:44:39 compute-0 nova_compute[254898]: 2025-11-29 05:44:39.964 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:44:39 compute-0 nova_compute[254898]: 2025-11-29 05:44:39.964 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:44:39 compute-0 nova_compute[254898]: 2025-11-29 05:44:39.964 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:44:39 compute-0 nova_compute[254898]: 2025-11-29 05:44:39.964 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:44:39 compute-0 nova_compute[254898]: 2025-11-29 05:44:39.965 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
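The burst of nova_compute "Running periodic task" lines is oslo.service's periodic task runner walking ComputeManager's decorated methods; _reclaim_queued_deletes returns immediately because reclaim_instance_interval is unset (<= 0). The pattern reduced to a sketch (class and task names here are illustrative, not nova's):

```python
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _poll_something(self, context):
        # Invoked by run_periodic_tasks(), like the ComputeManager._poll_*
        # entries above; returning early mirrors the
        # "reclaim_instance_interval <= 0, skipping" branch.
        return

mgr = Manager(cfg.CONF)
mgr.run_periodic_tasks(context=None)
```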
Nov 29 05:44:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s wr, 0 op/s
Nov 29 05:44:40 compute-0 ceph-mon[75176]: pgmap v1239: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s wr, 0 op/s
Nov 29 05:44:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:40 compute-0 nova_compute[254898]: 2025-11-29 05:44:40.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:44:41
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', 'default.rgw.log', 'default.rgw.control', '.mgr', 'vms']
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:44:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:44:41 compute-0 nova_compute[254898]: 2025-11-29 05:44:41.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:44:41 compute-0 nova_compute[254898]: 2025-11-29 05:44:41.977 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:44:41 compute-0 nova_compute[254898]: 2025-11-29 05:44:41.977 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:44:41 compute-0 nova_compute[254898]: 2025-11-29 05:44:41.977 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:44:41 compute-0 nova_compute[254898]: 2025-11-29 05:44:41.978 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:44:41 compute-0 nova_compute[254898]: 2025-11-29 05:44:41.978 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:44:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:44:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:44:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Nov 29 05:44:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:44:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3864051765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:44:42 compute-0 nova_compute[254898]: 2025-11-29 05:44:42.434 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
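update_available_resource shells out to `ceph df` twice per pass (once for the hypervisor view, once again before the placement inventory refresh), each call costing roughly 0.4s here. The same call through oslo.concurrency, the module the log itself cites (field names per `ceph df --format=json` output):

```python
import json
from oslo_concurrency import processutils

# Same subprocess the resource tracker logs above.
out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
df = json.loads(out)
# Cluster totals live under 'stats'; these feed the ~59.99 GB free_disk
# figure in the hypervisor resource view.
free_gb = df['stats']['total_avail_bytes'] / 1024 ** 3
```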
Nov 29 05:44:42 compute-0 nova_compute[254898]: 2025-11-29 05:44:42.614 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:44:42 compute-0 nova_compute[254898]: 2025-11-29 05:44:42.615 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5069MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:44:42 compute-0 nova_compute[254898]: 2025-11-29 05:44:42.615 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:44:42 compute-0 nova_compute[254898]: 2025-11-29 05:44:42.615 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:44:42 compute-0 nova_compute[254898]: 2025-11-29 05:44:42.667 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:44:42 compute-0 nova_compute[254898]: 2025-11-29 05:44:42.667 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:44:42 compute-0 nova_compute[254898]: 2025-11-29 05:44:42.689 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:44:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:44:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468906332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:44:43 compute-0 nova_compute[254898]: 2025-11-29 05:44:43.101 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:44:43 compute-0 nova_compute[254898]: 2025-11-29 05:44:43.106 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:44:43 compute-0 nova_compute[254898]: 2025-11-29 05:44:43.140 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
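The inventory record above is what the resource tracker pushes to placement; the schedulable amount of each resource class is the raw total minus the reservation, scaled by allocation_ratio. A worked example with exactly the numbers logged:

    # Placement treats usable capacity as (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 53.1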
Nov 29 05:44:43 compute-0 nova_compute[254898]: 2025-11-29 05:44:43.142 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:44:43 compute-0 nova_compute[254898]: 2025-11-29 05:44:43.143 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:44:43 compute-0 ceph-mon[75176]: pgmap v1240: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Nov 29 05:44:43 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3864051765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:44:43 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1468906332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:44:43 compute-0 sudo[274085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:44:43 compute-0 sudo[274085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:43 compute-0 sudo[274085]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:43 compute-0 sudo[274110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:44:43 compute-0 sudo[274110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:43 compute-0 sudo[274110]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:43 compute-0 sudo[274135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:44:43 compute-0 sudo[274135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:43 compute-0 sudo[274135]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:43 compute-0 sudo[274160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:44:43 compute-0 sudo[274160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 05:44:44 compute-0 sudo[274160]: pam_unix(sudo:session): session closed for user root
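The sudo burst that just closed is cephadm's remote-execution preamble: the mgr's cephadm module connects over SSH as ceph-admin, runs /bin/true to verify passwordless sudo, resolves python3, re-checks the connection, then executes the cephadm binary it copied to /var/lib/ceph/<fsid>/ under that interpreter. A rough sketch of the probe-then-run sequence, using plain OpenSSH as a stand-in for the mgr module's own transport (hostname and helper names here are illustrative, not cephadm's API):

    import shlex
    import subprocess

    def ssh_sudo(host: str, command: str) -> str:
        # One sudo invocation per probe, mirroring the pam_unix entries above.
        argv = ["ssh", host, "sudo", "--"] + shlex.split(command)
        return subprocess.run(argv, check=True,
                              capture_output=True, text=True).stdout

    def run_cephadm(host: str, binary: str, *args: str) -> str:
        ssh_sudo(host, "/bin/true")                            # sudo works?
        python = ssh_sudo(host, "/bin/which python3").strip()  # interpreter
        ssh_sudo(host, "/bin/true")                            # still healthy?
        return ssh_sudo(host, f"{python} {binary} --timeout 895 "
                              + " ".join(args))

    # e.g. run_cephadm("compute-0", "/var/lib/ceph/<fsid>/cephadm.<digest>",
    #                  "gather-facts")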
Nov 29 05:44:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:44:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:44:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:44:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:44:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:44:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:44:44 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev bcd232e2-308d-4405-86f3-1fd63d39b039 does not exist
Nov 29 05:44:44 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 42af8ed8-a182-4598-add6-a86a41099650 does not exist
Nov 29 05:44:44 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ce428be3-8b6d-40f4-b401-fc56cbfbfba4 does not exist
Nov 29 05:44:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:44:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:44:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:44:44 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:44:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:44:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:44:44 compute-0 sudo[274216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:44:44 compute-0 sudo[274216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:44 compute-0 sudo[274216]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:44 compute-0 sudo[274241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:44:44 compute-0 sudo[274241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:44 compute-0 sudo[274241]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:44 compute-0 sudo[274266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:44:44 compute-0 sudo[274266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:44 compute-0 sudo[274266]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:44 compute-0 sudo[274291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:44:44 compute-0 sudo[274291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:45 compute-0 podman[274357]: 2025-11-29 05:44:45.076124752 +0000 UTC m=+0.041580671 container create 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:44:45 compute-0 systemd[1]: Started libpod-conmon-08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f.scope.
Nov 29 05:44:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:44:45 compute-0 podman[274357]: 2025-11-29 05:44:45.056780591 +0000 UTC m=+0.022236600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:44:45 compute-0 podman[274357]: 2025-11-29 05:44:45.155873339 +0000 UTC m=+0.121329308 container init 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:44:45 compute-0 podman[274357]: 2025-11-29 05:44:45.167562616 +0000 UTC m=+0.133018545 container start 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:44:45 compute-0 podman[274357]: 2025-11-29 05:44:45.170905586 +0000 UTC m=+0.136361525 container attach 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:44:45 compute-0 affectionate_greider[274374]: 167 167
Nov 29 05:44:45 compute-0 systemd[1]: libpod-08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f.scope: Deactivated successfully.
Nov 29 05:44:45 compute-0 conmon[274374]: conmon 08dfa279b4c94370a0c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f.scope/container/memory.events
Nov 29 05:44:45 compute-0 podman[274357]: 2025-11-29 05:44:45.174651165 +0000 UTC m=+0.140107084 container died 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 05:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-185446150ef33fffcbe69642daa2eb7876338c49cad3be33257b99f4d3802ae0-merged.mount: Deactivated successfully.
Nov 29 05:44:45 compute-0 podman[274357]: 2025-11-29 05:44:45.222840832 +0000 UTC m=+0.188296751 container remove 08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:44:45 compute-0 systemd[1]: libpod-conmon-08dfa279b4c94370a0c57bbeffb397132a9a8ae47736b5b8b8440f8a22ab4b4f.scope: Deactivated successfully.
Nov 29 05:44:45 compute-0 ceph-mon[75176]: pgmap v1241: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 05:44:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:44:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:44:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:44:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:44:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:44:45 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:44:45 compute-0 podman[274398]: 2025-11-29 05:44:45.389151048 +0000 UTC m=+0.056368993 container create d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 05:44:45 compute-0 systemd[1]: Started libpod-conmon-d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82.scope.
Nov 29 05:44:45 compute-0 podman[274398]: 2025-11-29 05:44:45.359569254 +0000 UTC m=+0.026787299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:44:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:45 compute-0 podman[274398]: 2025-11-29 05:44:45.486506933 +0000 UTC m=+0.153724938 container init d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:44:45 compute-0 podman[274398]: 2025-11-29 05:44:45.49943931 +0000 UTC m=+0.166657255 container start d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:44:45 compute-0 podman[274398]: 2025-11-29 05:44:45.50278141 +0000 UTC m=+0.169999405 container attach d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:44:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 05:44:46 compute-0 ceph-mon[75176]: pgmap v1242: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 05:44:46 compute-0 strange_bardeen[274414]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:44:46 compute-0 strange_bardeen[274414]: --> relative data size: 1.0
Nov 29 05:44:46 compute-0 strange_bardeen[274414]: --> All data devices are unavailable
Nov 29 05:44:46 compute-0 systemd[1]: libpod-d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82.scope: Deactivated successfully.
Nov 29 05:44:46 compute-0 conmon[274414]: conmon d4df85fae5f7d74c4a1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82.scope/container/memory.events
Nov 29 05:44:46 compute-0 podman[274398]: 2025-11-29 05:44:46.511871522 +0000 UTC m=+1.179089467 container died d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:44:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c5d006daed35e1f49c86c39ccaac6cba641c4de3924457c5bbd11e89499dc8f-merged.mount: Deactivated successfully.
Nov 29 05:44:46 compute-0 podman[274398]: 2025-11-29 05:44:46.567316641 +0000 UTC m=+1.234534586 container remove d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bardeen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 05:44:46 compute-0 systemd[1]: libpod-conmon-d4df85fae5f7d74c4a1cb6074243c81ca4ae7c025eae324147c6a6d5d9123d82.scope: Deactivated successfully.
Nov 29 05:44:46 compute-0 sudo[274291]: pam_unix(sudo:session): session closed for user root
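Container strange_bardeen above was "ceph-volume lvm batch" run against the three ceph_vg*/ceph_lv* volumes, and its "All data devices are unavailable" line means every LV it was handed is already consumed: each one carries ceph.* LV tags including an osd_id (visible in the lvm list output further down), so the batch is an idempotent no-op rather than a failure. A small sketch of that already-an-OSD check, keyed off the lv_tags string ceph-volume stamps onto prepared LVs (the helper name is illustrative):

    def lv_is_available(lv_tags: str) -> bool:
        # A ceph.osd_id tag means the LV already backs an OSD and is skipped.
        tags = dict(t.split("=", 1) for t in lv_tags.split(",") if "=" in t)
        return "ceph.osd_id" not in tags

    # Abbreviated lv_tags from ceph_lv0 as listed later in this log:
    tags = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
            "ceph.cluster_name=ceph,ceph.osd_id=0,ceph.type=block")
    print(lv_is_available(tags))   # False -> "unavailable" to lvm batch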
Nov 29 05:44:46 compute-0 sudo[274458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:44:46 compute-0 sudo[274458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:46 compute-0 sudo[274458]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:46 compute-0 sudo[274483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:44:46 compute-0 sudo[274483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:46 compute-0 sudo[274483]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:46 compute-0 sudo[274508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:44:46 compute-0 sudo[274508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:46 compute-0 sudo[274508]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:46 compute-0 sudo[274533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:44:46 compute-0 sudo[274533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:47 compute-0 podman[274599]: 2025-11-29 05:44:47.207464467 +0000 UTC m=+0.054820315 container create fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:44:47 compute-0 systemd[1]: Started libpod-conmon-fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629.scope.
Nov 29 05:44:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:44:47 compute-0 podman[274599]: 2025-11-29 05:44:47.184542452 +0000 UTC m=+0.031898360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:44:47 compute-0 podman[274599]: 2025-11-29 05:44:47.287232334 +0000 UTC m=+0.134588202 container init fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 05:44:47 compute-0 podman[274599]: 2025-11-29 05:44:47.29377514 +0000 UTC m=+0.141130988 container start fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:44:47 compute-0 podman[274599]: 2025-11-29 05:44:47.297307445 +0000 UTC m=+0.144663323 container attach fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:44:47 compute-0 youthful_perlman[274616]: 167 167
Nov 29 05:44:47 compute-0 systemd[1]: libpod-fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629.scope: Deactivated successfully.
Nov 29 05:44:47 compute-0 podman[274599]: 2025-11-29 05:44:47.299701811 +0000 UTC m=+0.147057649 container died fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8774ef51880a6cb5097baf83b07c13931c115620f52f55433752315f0c4fed9-merged.mount: Deactivated successfully.
Nov 29 05:44:47 compute-0 podman[274599]: 2025-11-29 05:44:47.333562676 +0000 UTC m=+0.180918504 container remove fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:44:47 compute-0 systemd[1]: libpod-conmon-fcca16c7237b1f1e1261f9c4da8ad654a0daae98ff59c24d0061f2b623e30629.scope: Deactivated successfully.
Nov 29 05:44:47 compute-0 podman[274641]: 2025-11-29 05:44:47.504752219 +0000 UTC m=+0.051997648 container create 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 05:44:47 compute-0 systemd[1]: Started libpod-conmon-955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff.scope.
Nov 29 05:44:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ce259d095a691b129e6f486a4d8a0340cee9eb9f8298db384f8c5b6c179896/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ce259d095a691b129e6f486a4d8a0340cee9eb9f8298db384f8c5b6c179896/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ce259d095a691b129e6f486a4d8a0340cee9eb9f8298db384f8c5b6c179896/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ce259d095a691b129e6f486a4d8a0340cee9eb9f8298db384f8c5b6c179896/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:47 compute-0 podman[274641]: 2025-11-29 05:44:47.573101255 +0000 UTC m=+0.120346704 container init 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:44:47 compute-0 podman[274641]: 2025-11-29 05:44:47.48887832 +0000 UTC m=+0.036123769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:44:47 compute-0 podman[274641]: 2025-11-29 05:44:47.588681015 +0000 UTC m=+0.135926484 container start 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:44:47 compute-0 podman[274641]: 2025-11-29 05:44:47.593061569 +0000 UTC m=+0.140307018 container attach 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 05:44:48 compute-0 nova_compute[254898]: 2025-11-29 05:44:48.139 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:44:48 compute-0 nova_compute[254898]: 2025-11-29 05:44:48.141 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:44:48 compute-0 nova_compute[254898]: 2025-11-29 05:44:48.141 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:44:48 compute-0 nova_compute[254898]: 2025-11-29 05:44:48.141 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:44:48 compute-0 nova_compute[254898]: 2025-11-29 05:44:48.165 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:44:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 05:44:48 compute-0 musing_banzai[274658]: {
Nov 29 05:44:48 compute-0 musing_banzai[274658]:     "0": [
Nov 29 05:44:48 compute-0 musing_banzai[274658]:         {
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "devices": [
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "/dev/loop3"
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             ],
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_name": "ceph_lv0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_size": "21470642176",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "name": "ceph_lv0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "tags": {
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.cluster_name": "ceph",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.crush_device_class": "",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.encrypted": "0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.osd_id": "0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.type": "block",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.vdo": "0"
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             },
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "type": "block",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "vg_name": "ceph_vg0"
Nov 29 05:44:48 compute-0 musing_banzai[274658]:         }
Nov 29 05:44:48 compute-0 musing_banzai[274658]:     ],
Nov 29 05:44:48 compute-0 musing_banzai[274658]:     "1": [
Nov 29 05:44:48 compute-0 musing_banzai[274658]:         {
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "devices": [
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "/dev/loop4"
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             ],
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_name": "ceph_lv1",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_size": "21470642176",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "name": "ceph_lv1",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "tags": {
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.cluster_name": "ceph",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.crush_device_class": "",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.encrypted": "0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.osd_id": "1",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.type": "block",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.vdo": "0"
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             },
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "type": "block",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "vg_name": "ceph_vg1"
Nov 29 05:44:48 compute-0 musing_banzai[274658]:         }
Nov 29 05:44:48 compute-0 musing_banzai[274658]:     ],
Nov 29 05:44:48 compute-0 musing_banzai[274658]:     "2": [
Nov 29 05:44:48 compute-0 musing_banzai[274658]:         {
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "devices": [
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "/dev/loop5"
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             ],
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_name": "ceph_lv2",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_size": "21470642176",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "name": "ceph_lv2",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "tags": {
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.cluster_name": "ceph",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.crush_device_class": "",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.encrypted": "0",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.osd_id": "2",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.type": "block",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:                 "ceph.vdo": "0"
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             },
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "type": "block",
Nov 29 05:44:48 compute-0 musing_banzai[274658]:             "vg_name": "ceph_vg2"
Nov 29 05:44:48 compute-0 musing_banzai[274658]:         }
Nov 29 05:44:48 compute-0 musing_banzai[274658]:     ]
Nov 29 05:44:48 compute-0 musing_banzai[274658]: }
Nov 29 05:44:48 compute-0 systemd[1]: libpod-955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff.scope: Deactivated successfully.
Nov 29 05:44:48 compute-0 podman[274641]: 2025-11-29 05:44:48.356240872 +0000 UTC m=+0.903486301 container died 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-03ce259d095a691b129e6f486a4d8a0340cee9eb9f8298db384f8c5b6c179896-merged.mount: Deactivated successfully.
Nov 29 05:44:48 compute-0 podman[274641]: 2025-11-29 05:44:48.419130608 +0000 UTC m=+0.966376047 container remove 955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 29 05:44:48 compute-0 systemd[1]: libpod-conmon-955038d00bae5f4aa736efda597ae83acd1eb9edf19f9db4dfe4dc53f4cf79ff.scope: Deactivated successfully.
Nov 29 05:44:48 compute-0 sudo[274533]: pam_unix(sudo:session): session closed for user root
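The JSON block printed by container musing_banzai is the "ceph-volume lvm list --format json" result cephadm just requested: a map from OSD id to the backing logical volumes, each with its physical device and ceph.* tags. A short parsing sketch against exactly the structure shown above (the input filename is hypothetical; it stands for the JSON saved verbatim):

    import json

    with open("lvm_list.json") as fh:   # the JSON block above, saved as-is
        listing = json.load(fh)

    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid {tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (...)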
Nov 29 05:44:48 compute-0 podman[274668]: 2025-11-29 05:44:48.48060519 +0000 UTC m=+0.074243397 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:44:48 compute-0 sudo[274693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:44:48 compute-0 sudo[274693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:48 compute-0 sudo[274693]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:48 compute-0 sudo[274722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:44:48 compute-0 sudo[274722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:48 compute-0 sudo[274722]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:48 compute-0 sudo[274747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:44:48 compute-0 sudo[274747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:48 compute-0 sudo[274747]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:48 compute-0 sudo[274772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:44:48 compute-0 sudo[274772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:49 compute-0 podman[274837]: 2025-11-29 05:44:49.013948976 +0000 UTC m=+0.038189340 container create 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:44:49 compute-0 systemd[1]: Started libpod-conmon-944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32.scope.
Nov 29 05:44:49 compute-0 podman[274837]: 2025-11-29 05:44:48.995552788 +0000 UTC m=+0.019793192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:44:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:44:49 compute-0 podman[274837]: 2025-11-29 05:44:49.12217603 +0000 UTC m=+0.146416484 container init 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:44:49 compute-0 podman[274837]: 2025-11-29 05:44:49.130324684 +0000 UTC m=+0.154565048 container start 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:44:49 compute-0 podman[274837]: 2025-11-29 05:44:49.133662613 +0000 UTC m=+0.157903007 container attach 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 05:44:49 compute-0 kind_banzai[274853]: 167 167
Nov 29 05:44:49 compute-0 systemd[1]: libpod-944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32.scope: Deactivated successfully.
Nov 29 05:44:49 compute-0 conmon[274853]: conmon 944d40f710cdda4d6108 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32.scope/container/memory.events
Nov 29 05:44:49 compute-0 podman[274837]: 2025-11-29 05:44:49.136695345 +0000 UTC m=+0.160935749 container died 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:44:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f005368ff4bc0b921e5a693a4d1f9564e6ec069dd89b4f814e819a2ec646a774-merged.mount: Deactivated successfully.
Nov 29 05:44:49 compute-0 podman[274837]: 2025-11-29 05:44:49.181502991 +0000 UTC m=+0.205743395 container remove 944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_banzai, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 05:44:49 compute-0 systemd[1]: libpod-conmon-944d40f710cdda4d6108f9a159da50d464fd31bca022289049f6a3d717e4cf32.scope: Deactivated successfully.
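[annotation] The kind_banzai sequence above traces one short-lived cephadm helper container through the full podman lifecycle -- create, init, start, attach, died, remove -- with systemd tearing down the libpod and conmon scopes afterwards. A sketch, assuming a local podman (field names as recent podman releases emit them, an assumption), that watches the same event stream:

    # Hedged sketch: follow container lifecycle events as podman emits them,
    # one JSON object per line.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))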
Nov 29 05:44:49 compute-0 ceph-mon[75176]: pgmap v1243: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 05:44:49 compute-0 podman[274877]: 2025-11-29 05:44:49.366342547 +0000 UTC m=+0.040687708 container create 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:44:49 compute-0 systemd[1]: Started libpod-conmon-44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd.scope.
Nov 29 05:44:49 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263180e3a4c3b24b8d109f6a4e2e5e506abf82a3ac24a2e7ac14e805edcbacff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263180e3a4c3b24b8d109f6a4e2e5e506abf82a3ac24a2e7ac14e805edcbacff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263180e3a4c3b24b8d109f6a4e2e5e506abf82a3ac24a2e7ac14e805edcbacff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263180e3a4c3b24b8d109f6a4e2e5e506abf82a3ac24a2e7ac14e805edcbacff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:44:49 compute-0 podman[274877]: 2025-11-29 05:44:49.435961813 +0000 UTC m=+0.110306984 container init 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:44:49 compute-0 podman[274877]: 2025-11-29 05:44:49.349970029 +0000 UTC m=+0.024315240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:44:49 compute-0 podman[274877]: 2025-11-29 05:44:49.446037273 +0000 UTC m=+0.120382434 container start 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:44:49 compute-0 podman[274877]: 2025-11-29 05:44:49.448759318 +0000 UTC m=+0.123104499 container attach 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:44:49 compute-0 sshd-session[274670]: Received disconnect from 45.120.216.232 port 57046:11: Bye Bye [preauth]
Nov 29 05:44:49 compute-0 sshd-session[274670]: Disconnected from authenticating user root 45.120.216.232 port 57046 [preauth]
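[annotation] The two sshd-session lines above (and the 152.32.145.111 pair later) are failed root login attempts from external addresses dropped at preauth. A sketch tallying such sources from the journal, assuming journalctl and the sshd-session syslog identifier seen here:

    # Hedged sketch: count preauth disconnects per source address. Message
    # shape assumed as in the log above:
    # "Disconnected from authenticating user root 45.120.216.232 port 57046 [preauth]"
    import json
    import subprocess
    from collections import Counter

    out = subprocess.run(
        ["journalctl", "-t", "sshd-session", "-o", "json", "--no-pager"],
        check=True, capture_output=True, text=True).stdout

    hits = Counter()
    for line in out.splitlines():
        msg = json.loads(line).get("MESSAGE", "")
        if msg.startswith("Disconnected from authenticating user") and msg.endswith("[preauth]"):
            hits[msg.split()[-4]] += 1  # the source-address token
    for addr, n in hits.most_common():
        print(addr, n)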
Nov 29 05:44:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 05:44:50 compute-0 modest_merkle[274894]: {
Nov 29 05:44:50 compute-0 modest_merkle[274894]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "osd_id": 0,
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "type": "bluestore"
Nov 29 05:44:50 compute-0 modest_merkle[274894]:     },
Nov 29 05:44:50 compute-0 modest_merkle[274894]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "osd_id": 1,
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "type": "bluestore"
Nov 29 05:44:50 compute-0 modest_merkle[274894]:     },
Nov 29 05:44:50 compute-0 modest_merkle[274894]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "osd_id": 2,
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:44:50 compute-0 modest_merkle[274894]:         "type": "bluestore"
Nov 29 05:44:50 compute-0 modest_merkle[274894]:     }
Nov 29 05:44:50 compute-0 modest_merkle[274894]: }
Nov 29 05:44:50 compute-0 systemd[1]: libpod-44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd.scope: Deactivated successfully.
Nov 29 05:44:50 compute-0 podman[274877]: 2025-11-29 05:44:50.338776648 +0000 UTC m=+1.013121809 container died 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:44:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-263180e3a4c3b24b8d109f6a4e2e5e506abf82a3ac24a2e7ac14e805edcbacff-merged.mount: Deactivated successfully.
Nov 29 05:44:50 compute-0 podman[274877]: 2025-11-29 05:44:50.392427434 +0000 UTC m=+1.066772605 container remove 44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:44:50 compute-0 systemd[1]: libpod-conmon-44a2dbb96d6d4f0e55f7b477b9fa186c0994426e44a0cc04855c9bfa036d42dd.scope: Deactivated successfully.
Nov 29 05:44:50 compute-0 sudo[274772]: pam_unix(sudo:session): session closed for user root
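[annotation] The sudo pair 274772 brackets the whole cephadm call: the JSON that modest_merkle printed above is the "ceph-volume raw list --format json" inventory of this host's three BlueStore OSDs, which cephadm then stores through the config-key set commands that follow. A sketch parsing that exact payload (verbatim from the output above):

    # Hedged sketch: parse the ceph-volume "raw list" JSON shown above
    # (osd_uuid -> device metadata) into a per-OSD summary.
    import json

    raw_list = json.loads('''
    {
        "3cc3f442-c807-4e2a-868e-a4aae87af231": {
            "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
            "type": "bluestore"
        },
        "b9801566-0c31-4202-a669-811037218c27": {
            "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
            "device": "/dev/mapper/ceph_vg1-ceph_lv1",
            "osd_id": 1,
            "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
            "type": "bluestore"
        },
        "eec69945-b157-41e1-8fba-3992c2dca958": {
            "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
            "device": "/dev/mapper/ceph_vg2-ceph_lv2",
            "osd_id": 2,
            "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
            "type": "bluestore"
        }
    }
    ''')

    for meta in sorted(raw_list.values(), key=lambda m: m["osd_id"]):
        print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")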
Nov 29 05:44:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:44:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:44:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:44:50 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:44:50 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ef25af65-0e7c-493a-bd9c-dd645fc30d5a does not exist
Nov 29 05:44:50 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev f38a7d0c-6751-4ece-89f8-070dd9c068cd does not exist
Nov 29 05:44:50 compute-0 sudo[274941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:44:50 compute-0 sudo[274941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:50 compute-0 sudo[274941]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:50 compute-0 sudo[274966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:44:50 compute-0 sudo[274966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:44:50 compute-0 sudo[274966]: pam_unix(sudo:session): session closed for user root
Nov 29 05:44:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:51 compute-0 ceph-mon[75176]: pgmap v1244: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Nov 29 05:44:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:44:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:44:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
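[annotation] The _maybe_adjust pass above is arithmetic reproducible from its own numbers: each pool's pg target is usage_ratio x bias x a pg budget, and every line here is consistent with a budget of 300 (plausibly mon_target_pg_per_osd=100 across the 3 OSDs -- an assumption, the log does not state it). The result is then quantized to a power of two and left at the current pg_num when the change is small, hence "quantized to 32 (current 32)". Checking three of the lines above:

    # Hedged sketch: reproduce the pg_autoscaler targets printed above.
    # PG_BUDGET = 300 is inferred from the log's numbers, not stated in it.
    PG_BUDGET = 300
    pools = [
        # (name, usage_ratio, bias, target printed in the log)
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("images",             0.000665858301588852,  1.0, 0.19975749047665559),
        ("cephfs.cephfs.meta", 0.0005435097797421371, 4.0, 0.6522117356905646),
    ]
    for name, ratio, bias, logged in pools:
        target = ratio * bias * PG_BUDGET
        assert abs(target - logged) < 1e-9, name
        print(f"{name}: pg target {target} matches the log")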
Nov 29 05:44:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:44:52 compute-0 ceph-mon[75176]: pgmap v1245: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:44:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:44:55 compute-0 ceph-mon[75176]: pgmap v1246: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:44:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:44:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:44:57 compute-0 ceph-mon[75176]: pgmap v1247: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:44:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:44:59 compute-0 ceph-mon[75176]: pgmap v1248: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:01 compute-0 ceph-mon[75176]: pgmap v1249: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:03 compute-0 ceph-mon[75176]: pgmap v1250: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:04 compute-0 podman[274991]: 2025-11-29 05:45:04.020254961 +0000 UTC m=+0.059646730 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 05:45:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:04 compute-0 ceph-mon[75176]: pgmap v1251: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:07 compute-0 ceph-mon[75176]: pgmap v1252: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:08 compute-0 podman[275012]: 2025-11-29 05:45:08.126805897 +0000 UTC m=+0.168160121 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:45:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:08 compute-0 ceph-mon[75176]: pgmap v1253: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:11 compute-0 ceph-mon[75176]: pgmap v1254: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:45:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:45:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:45:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:45:11 compute-0 sshd-session[275040]: Received disconnect from 152.32.145.111 port 42706:11: Bye Bye [preauth]
Nov 29 05:45:11 compute-0 sshd-session[275040]: Disconnected from authenticating user root 152.32.145.111 port 42706 [preauth]
Nov 29 05:45:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:45:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:45:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:12 compute-0 sshd-session[275042]: Accepted publickey for zuul from 192.168.122.10 port 38330 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:45:12 compute-0 systemd-logind[793]: New session 51 of user zuul.
Nov 29 05:45:12 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 29 05:45:12 compute-0 sshd-session[275042]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:45:12 compute-0 sudo[275046]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 29 05:45:12 compute-0 sudo[275046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
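[annotation] The zuul session that just opened collects diagnostics; the sudo line shows the exact invocation. The same command driven from Python, under the same assumptions (root, and the sos plugins named in the log available on the host):

    # Hedged sketch: run the sos collection exactly as the zuul job above does.
    import subprocess

    subprocess.run(
        "rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && "
        "sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp "
        "-p container,openstack_edpm,system,storage,virt",
        shell=True, check=True)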
Nov 29 05:45:13 compute-0 ceph-mon[75176]: pgmap v1255: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:45:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:45:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:45:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:45:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:45:13.761 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
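[annotation] The three ovn_metadata_agent lines above are one acquire/release cycle of an oslo.concurrency lock around ProcessMonitor._check_child_processes. A minimal equivalent of the pattern, assuming oslo.concurrency is installed:

    # Hedged sketch: serialize a periodic check behind a named in-process
    # lock, as the lockutils debug lines above show neutron doing.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # neutron walks its monitored external processes here

    check_child_processes()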
Nov 29 05:45:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:45:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3802455169' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:45:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:45:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3802455169' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:45:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3802455169' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:45:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3802455169' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
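[annotation] The client.openstack dispatches above are a storage-capacity poll: "df" plus "osd pool get-quota" on the volumes pool, both requested as JSON. The same queries from Python, assuming a local ceph CLI with a usable keyring:

    # Hedged sketch: issue the two queries logged above and read the totals.
    import json
    import subprocess

    def ceph(*args):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    df = ceph("df")
    print("avail bytes:", df["stats"]["total_avail_bytes"])
    print("volumes quota:", ceph("osd", "pool", "get-quota", "volumes"))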
Nov 29 05:45:15 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14515 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:15 compute-0 ceph-mon[75176]: pgmap v1256: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:15 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14517 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 05:45:16 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344424986' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.304038) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116304069, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2383, "num_deletes": 505, "total_data_size": 3399465, "memory_usage": 3449168, "flush_reason": "Manual Compaction"}
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 29 05:45:16 compute-0 ceph-mon[75176]: from='client.14515 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:16 compute-0 ceph-mon[75176]: from='client.14517 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:16 compute-0 ceph-mon[75176]: pgmap v1257: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116321444, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3105267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26246, "largest_seqno": 28628, "table_properties": {"data_size": 3095353, "index_size": 5704, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 25291, "raw_average_key_size": 20, "raw_value_size": 3072861, "raw_average_value_size": 2476, "num_data_blocks": 252, "num_entries": 1241, "num_filter_entries": 1241, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764394907, "oldest_key_time": 1764394907, "file_creation_time": 1764395116, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 17458 microseconds, and 8436 cpu microseconds.
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.321496) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3105267 bytes OK
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.321516) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.323327) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.323344) EVENT_LOG_v1 {"time_micros": 1764395116323338, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.323363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3388375, prev total WAL file size 3388375, number of live WAL files 2.
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.324517) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3032KB)], [59(9974KB)]
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116324580, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13318741, "oldest_snapshot_seqno": -1}
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5853 keys, 8814161 bytes, temperature: kUnknown
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116373504, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8814161, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8774342, "index_size": 24093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 146251, "raw_average_key_size": 24, "raw_value_size": 8668741, "raw_average_value_size": 1481, "num_data_blocks": 988, "num_entries": 5853, "num_filter_entries": 5853, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764395116, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.373749) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8814161 bytes
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.375130) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 271.6 rd, 179.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.7 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(7.1) write-amplify(2.8) OK, records in: 6863, records dropped: 1010 output_compression: NoCompression
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.375147) EVENT_LOG_v1 {"time_micros": 1764395116375139, "job": 32, "event": "compaction_finished", "compaction_time_micros": 49034, "compaction_time_cpu_micros": 19910, "output_level": 6, "num_output_files": 1, "total_output_size": 8814161, "num_input_records": 6863, "num_output_records": 5853, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116375632, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395116377356, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.324423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.377445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.377451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.377453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.377455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:45:16 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:45:16.377457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
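[annotation] JOB 31 and JOB 32 above are the monitor's RocksDB at work: a manual-compaction request first flushes the memtable to L0 table #61, then compacts #61 with the existing L6 file #59 into #62 and deletes both inputs. The amplification figures in the JOB 32 summary follow from the byte counts in its own events:

    # Hedged sketch: recompute JOB 32's amplification from the logged sizes.
    l0_in    = 3105267   # table #61, the L0 flush from JOB 31
    total_in = 13318741  # input_data_size in the compaction_started event
    out      = 8814161   # total_output_size in compaction_finished
    print(f"write-amplify      {out / l0_in:.1f}")               # 2.8
    print(f"read-write-amplify {(total_in + out) / l0_in:.1f}")  # 7.1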
Nov 29 05:45:17 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1344424986' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 05:45:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:18 compute-0 ceph-mon[75176]: pgmap v1258: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:19 compute-0 podman[275327]: 2025-11-29 05:45:19.029124427 +0000 UTC m=+0.078903969 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 05:45:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:21 compute-0 ceph-mon[75176]: pgmap v1259: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:21 compute-0 ovs-vsctl[275391]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
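[annotation] The ovs-vsctl ERR above is a probe of an other_config key that is simply unset on this node. The lookup can be made non-fatal with --if-exists:

    # Hedged sketch: read other_config:dpdk-init without the ERR above;
    # --if-exists makes ovs-vsctl print nothing instead of failing.
    import subprocess

    out = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "other_config:dpdk-init"],
        capture_output=True, text=True)
    print("dpdk-init:", out.stdout.strip() or "<unset>")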
Nov 29 05:45:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:22 compute-0 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 05:45:23 compute-0 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 05:45:23 compute-0 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
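[annotation] The three virtqemud messages above come from the modular libvirt split: virtqemud probes the read-only sockets of the network, nwfilter and storage driver daemons, which are not active on this compute node. A quick presence check for the same sockets:

    # Hedged sketch: list which modular libvirt driver sockets exist,
    # matching the three probes that failed above.
    from pathlib import Path

    for drv in ("virtnetworkd", "virtnwfilterd", "virtstoraged", "virtqemud"):
        sock = Path("/var/run/libvirt") / f"{drv}-sock-ro"
        print(sock, "present" if sock.exists() else "missing")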
Nov 29 05:45:23 compute-0 ceph-mon[75176]: pgmap v1260: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:23 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: cache status {prefix=cache status} (starting...)
Nov 29 05:45:23 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: client ls {prefix=client ls} (starting...)
Nov 29 05:45:23 compute-0 lvm[275722]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 05:45:23 compute-0 lvm[275722]: VG ceph_vg0 finished
Nov 29 05:45:23 compute-0 lvm[275727]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 05:45:23 compute-0 lvm[275727]: VG ceph_vg1 finished
Nov 29 05:45:24 compute-0 lvm[275760]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 05:45:24 compute-0 lvm[275760]: VG ceph_vg2 finished
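[annotation] The lvm pvscan lines above show udev autoactivation completing the three ceph volume groups, one per loop-device PV; these back the LVs that ceph-volume listed earlier (/dev/mapper/ceph_vg0-ceph_lv0 and so on). Confirming them through LVM's JSON reporting:

    # Hedged sketch: list the LVs behind the VGs reported complete above.
    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_size"],
        check=True, capture_output=True, text=True).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        print(lv["vg_name"], lv["lv_name"], lv["lv_size"])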
Nov 29 05:45:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14521 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:24 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 05:45:24 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 05:45:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14523 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:24 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 05:45:24 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 05:45:24 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 05:45:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 05:45:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425794923' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 05:45:25 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 05:45:25 compute-0 ceph-mon[75176]: from='client.14521 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:25 compute-0 ceph-mon[75176]: pgmap v1261: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:25 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1425794923' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 05:45:25 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 05:45:25 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14529 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:25 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 05:45:25 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:45:25.352+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
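[annotation] The mgr reply above (echoed by the container log line) names its own fix: "healthcheck history ls" needs the prometheus mgr module. Applying and verifying it, assuming admin privileges on the cluster:

    # Hedged sketch: enable the module the error message above asks for,
    # then confirm it appears among the enabled modules.
    import json
    import subprocess

    subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"], check=True)
    mods = json.loads(subprocess.run(
        ["ceph", "mgr", "module", "ls", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    print("prometheus enabled:", "prometheus" in mods.get("enabled_modules", []))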
Nov 29 05:45:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:45:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807027424' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:45:25 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 05:45:25 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: ops {prefix=ops} (starting...)
Nov 29 05:45:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 05:45:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573578558' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 05:45:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 05:45:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457309832' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 05:45:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 05:45:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352512391' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 05:45:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 05:45:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2532092899' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 05:45:26 compute-0 ceph-mon[75176]: from='client.14523 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3807027424' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:45:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3573578558' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 05:45:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/457309832' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 05:45:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2352512391' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 05:45:26 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2532092899' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 05:45:26 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session ls {prefix=session ls} (starting...)
Nov 29 05:45:26 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: status {prefix=status} (starting...)
Nov 29 05:45:26 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14541 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 05:45:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580026601' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 05:45:27 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14545 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 05:45:27 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396287443' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 05:45:27 compute-0 ceph-mon[75176]: from='client.14529 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:27 compute-0 ceph-mon[75176]: pgmap v1262: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:27 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1580026601' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 05:45:27 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2396287443' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 05:45:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 05:45:27 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3808788625' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 05:45:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 05:45:27 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139287242' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:45:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 05:45:27 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1517593296' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 05:45:28 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 05:45:28 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/106118019' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 05:45:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:28 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14557 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:28 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:45:28.324+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 05:45:28 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 05:45:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 05:45:29 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3904735651' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 05:45:29 compute-0 ceph-mon[75176]: from='client.14541 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:29 compute-0 ceph-mon[75176]: from='client.14545 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:29 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3808788625' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 05:45:29 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/4139287242' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:45:29 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1517593296' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 05:45:29 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/106118019' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 05:45:29 compute-0 ceph-mon[75176]: pgmap v1263: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:29 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 05:45:29 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/296138431' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 05:45:29 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14563 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:29 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:29 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:01.933157+0000)
Nov 29 05:45:29 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Nov 29 05:45:29 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:31.310787+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.1c scrub starts
Nov 29 05:45:29 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:31.324896+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.1c scrub ok
Nov 29 05:45:29 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:29 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 107) v1
Nov 29 05:45:29 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:31.310787+0000 osd.2 (osd.2) 106 : cluster [DBG] 7.1c scrub starts
Nov 29 05:45:29 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:31.324896+0000 osd.2 (osd.2) 107 : cluster [DBG] 7.1c scrub ok
Nov 29 05:45:29 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:29 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:29 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:29 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:29 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:02.933374+0000)
Nov 29 05:45:29 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 767992 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:03.933542+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66437120 unmapped: 614400 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:04.933702+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:05.933896+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66461696 unmapped: 589824 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:06.934052+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:36.330492+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.2 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:36.344567+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.2 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 109) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:36.330492+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.2 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:36.344567+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.2 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:07.934236+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66469888 unmapped: 581632 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 769139 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:08.934324+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66478080 unmapped: 573440 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:09.934523+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66486272 unmapped: 565248 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:10.934734+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.991065025s of 13.020785332s, submitted: 8
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66494464 unmapped: 557056 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:11.934967+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:41.369042+0000 osd.2 (osd.2) 110 : cluster [DBG] 11.d scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:41.383191+0000 osd.2 (osd.2) 111 : cluster [DBG] 11.d scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 111) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:41.369042+0000 osd.2 (osd.2) 110 : cluster [DBG] 11.d scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:41.383191+0000 osd.2 (osd.2) 111 : cluster [DBG] 11.d scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66502656 unmapped: 548864 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:12.935172+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:42.321325+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.11 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:42.335634+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.11 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 113) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:42.321325+0000 osd.2 (osd.2) 112 : cluster [DBG] 11.11 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:42.335634+0000 osd.2 (osd.2) 113 : cluster [DBG] 11.11 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66510848 unmapped: 540672 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 771436 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:13.935462+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:14.935628+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66519040 unmapped: 532480 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:15.935759+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:16.935903+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:46.249057+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.1 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:46.263191+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.1 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 115) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:46.249057+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.1 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:46.263191+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.1 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66535424 unmapped: 516096 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:17.936464+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66543616 unmapped: 507904 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 772583 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:18.936631+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.d scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.d scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66551808 unmapped: 499712 heap: 67051520 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:19.936848+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:49.222623+0000 osd.2 (osd.2) 116 : cluster [DBG] 8.d scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:49.236802+0000 osd.2 (osd.2) 117 : cluster [DBG] 8.d scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 117) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:49.222623+0000 osd.2 (osd.2) 116 : cluster [DBG] 8.d scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:49.236802+0000 osd.2 (osd.2) 117 : cluster [DBG] 8.d scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:20.937101+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:50.253014+0000 osd.2 (osd.2) 118 : cluster [DBG] 7.5 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:13:50.267131+0000 osd.2 (osd.2) 119 : cluster [DBG] 7.5 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 119) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:50.253014+0000 osd.2 (osd.2) 118 : cluster [DBG] 7.5 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:13:50.267131+0000 osd.2 (osd.2) 119 : cluster [DBG] 7.5 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:21.937348+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66560000 unmapped: 1540096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:22.937503+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66568192 unmapped: 1531904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 774877 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:23.937638+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66584576 unmapped: 1515520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:24.937762+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:25.937868+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:26.938020+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66592768 unmapped: 1507328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:27.938180+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 1499136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 774877 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:28.938331+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66600960 unmapped: 1499136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:29.938545+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.841436386s of 18.878847122s, submitted: 10
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 1490944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:30.938732+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:00.247964+0000 osd.2 (osd.2) 120 : cluster [DBG] 11.9 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:00.265577+0000 osd.2 (osd.2) 121 : cluster [DBG] 11.9 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 121) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:00.247964+0000 osd.2 (osd.2) 120 : cluster [DBG] 11.9 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:00.265577+0000 osd.2 (osd.2) 121 : cluster [DBG] 11.9 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66609152 unmapped: 1490944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:31.939006+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:01.252493+0000 osd.2 (osd.2) 122 : cluster [DBG] 7.c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:01.266626+0000 osd.2 (osd.2) 123 : cluster [DBG] 7.c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 123) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:01.252493+0000 osd.2 (osd.2) 122 : cluster [DBG] 7.c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:01.266626+0000 osd.2 (osd.2) 123 : cluster [DBG] 7.c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66625536 unmapped: 1474560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:32.939225+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66633728 unmapped: 1466368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 778319 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:33.939436+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:03.314777+0000 osd.2 (osd.2) 124 : cluster [DBG] 3.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:03.328874+0000 osd.2 (osd.2) 125 : cluster [DBG] 3.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 125) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:03.314777+0000 osd.2 (osd.2) 124 : cluster [DBG] 3.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:03.328874+0000 osd.2 (osd.2) 125 : cluster [DBG] 3.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:34.939711+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66641920 unmapped: 1458176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:35.939878+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66650112 unmapped: 1449984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:36.940087+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 1441792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:37.940373+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66658304 unmapped: 1441792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 779466 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:38.940542+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:08.308752+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:08.322879+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 1433600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 127) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:08.308752+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:08.322879+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:39.940739+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:09.309216+0000 osd.2 (osd.2) 128 : cluster [DBG] 11.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:09.323407+0000 osd.2 (osd.2) 129 : cluster [DBG] 11.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 1433600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 129) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:09.309216+0000 osd.2 (osd.2) 128 : cluster [DBG] 11.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:09.323407+0000 osd.2 (osd.2) 129 : cluster [DBG] 11.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:40.941056+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66666496 unmapped: 1433600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:41.941303+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.001470566s of 12.036796570s, submitted: 10
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 1425408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:42.941443+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:12.285233+0000 osd.2 (osd.2) 130 : cluster [DBG] 11.b scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:12.299042+0000 osd.2 (osd.2) 131 : cluster [DBG] 11.b scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66674688 unmapped: 1425408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781762 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 131) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:12.285233+0000 osd.2 (osd.2) 130 : cluster [DBG] 11.b scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:12.299042+0000 osd.2 (osd.2) 131 : cluster [DBG] 11.b scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:43.941631+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66691072 unmapped: 1409024 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:44.941814+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:14.336464+0000 osd.2 (osd.2) 132 : cluster [DBG] 11.2 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:14.350605+0000 osd.2 (osd.2) 133 : cluster [DBG] 11.2 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 133) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:14.336464+0000 osd.2 (osd.2) 132 : cluster [DBG] 11.2 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:14.350605+0000 osd.2 (osd.2) 133 : cluster [DBG] 11.2 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66699264 unmapped: 1400832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:45.942026+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 1392640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:46.942186+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:16.329089+0000 osd.2 (osd.2) 134 : cluster [DBG] 8.2 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:16.343205+0000 osd.2 (osd.2) 135 : cluster [DBG] 8.2 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 135) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:16.329089+0000 osd.2 (osd.2) 134 : cluster [DBG] 8.2 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:16.343205+0000 osd.2 (osd.2) 135 : cluster [DBG] 8.2 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66707456 unmapped: 1392640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:47.942351+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:17.293655+0000 osd.2 (osd.2) 136 : cluster [DBG] 7.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:17.307685+0000 osd.2 (osd.2) 137 : cluster [DBG] 7.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 137) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:17.293655+0000 osd.2 (osd.2) 136 : cluster [DBG] 7.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:17.307685+0000 osd.2 (osd.2) 137 : cluster [DBG] 7.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 1384448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 785204 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:48.942527+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66715648 unmapped: 1384448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:49.942656+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:19.291134+0000 osd.2 (osd.2) 138 : cluster [DBG] 3.5 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:19.305350+0000 osd.2 (osd.2) 139 : cluster [DBG] 3.5 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 139) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:19.291134+0000 osd.2 (osd.2) 138 : cluster [DBG] 3.5 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:19.305350+0000 osd.2 (osd.2) 139 : cluster [DBG] 3.5 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 1376256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:50.942849+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 1376256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:51.943442+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66723840 unmapped: 1376256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:52.944021+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.964669228s of 11.010027885s, submitted: 10
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 787498 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:53.944548+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:23.295121+0000 osd.2 (osd.2) 140 : cluster [DBG] 3.e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:23.308985+0000 osd.2 (osd.2) 141 : cluster [DBG] 3.e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 141) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:23.295121+0000 osd.2 (osd.2) 140 : cluster [DBG] 3.e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:23.308985+0000 osd.2 (osd.2) 141 : cluster [DBG] 3.e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66740224 unmapped: 1359872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:54.945107+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 1351680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:55.945384+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:25.375742+0000 osd.2 (osd.2) 142 : cluster [DBG] 7.a scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:25.389858+0000 osd.2 (osd.2) 143 : cluster [DBG] 7.a scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 143) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:25.375742+0000 osd.2 (osd.2) 142 : cluster [DBG] 7.a scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:25.389858+0000 osd.2 (osd.2) 143 : cluster [DBG] 7.a scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66748416 unmapped: 1351680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:56.945838+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:26.341795+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.3 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:26.355955+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.3 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 145) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:26.341795+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.3 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:26.355955+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.3 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66756608 unmapped: 1343488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:57.946434+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789793 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:58.946837+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66764800 unmapped: 1335296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:59.947122+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66772992 unmapped: 1327104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:00.947413+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:30.374155+0000 osd.2 (osd.2) 146 : cluster [DBG] 8.4 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:30.388340+0000 osd.2 (osd.2) 147 : cluster [DBG] 8.4 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 147) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:30.374155+0000 osd.2 (osd.2) 146 : cluster [DBG] 8.4 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:30.388340+0000 osd.2 (osd.2) 147 : cluster [DBG] 8.4 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66781184 unmapped: 1318912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:01.947868+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:31.367359+0000 osd.2 (osd.2) 148 : cluster [DBG] 8.1b scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:31.384965+0000 osd.2 (osd.2) 149 : cluster [DBG] 8.1b scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 149) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:31.367359+0000 osd.2 (osd.2) 148 : cluster [DBG] 8.1b scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:31.384965+0000 osd.2 (osd.2) 149 : cluster [DBG] 8.1b scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 1310720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:02.948053+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66789376 unmapped: 1310720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 792088 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:03.948239+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.076447487s of 11.113275528s, submitted: 10
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66813952 unmapped: 1286144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:04.948421+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:34.408322+0000 osd.2 (osd.2) 150 : cluster [DBG] 3.11 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:34.422331+0000 osd.2 (osd.2) 151 : cluster [DBG] 3.11 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.18 deep-scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.18 deep-scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 151) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:34.408322+0000 osd.2 (osd.2) 150 : cluster [DBG] 3.11 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:34.422331+0000 osd.2 (osd.2) 151 : cluster [DBG] 3.11 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:05.948611+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:35.403984+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.18 deep-scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:35.418112+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.18 deep-scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 153) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:35.403984+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.18 deep-scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:35.418112+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.18 deep-scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:06.948812+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66822144 unmapped: 1277952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:07.948965+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.15 deep-scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.15 deep-scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795533 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:08.949123+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:38.403836+0000 osd.2 (osd.2) 154 : cluster [DBG] 7.15 deep-scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:38.417987+0000 osd.2 (osd.2) 155 : cluster [DBG] 7.15 deep-scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 155) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:38.403836+0000 osd.2 (osd.2) 154 : cluster [DBG] 7.15 deep-scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:38.417987+0000 osd.2 (osd.2) 155 : cluster [DBG] 7.15 deep-scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66830336 unmapped: 1269760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:09.949314+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:10.949471+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:11.949648+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:41.369542+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.1a scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:41.383643+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.1a scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 157) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:41.369542+0000 osd.2 (osd.2) 156 : cluster [DBG] 11.1a scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:41.383643+0000 osd.2 (osd.2) 157 : cluster [DBG] 11.1a scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66838528 unmapped: 1261568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:12.949885+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 796682 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:13.950048+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:14.950212+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66846720 unmapped: 1253376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:15.950357+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 1245184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:16.950512+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.979099274s of 13.024011612s, submitted: 8
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66854912 unmapped: 1245184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:17.950736+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:47.432177+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.1c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:47.446481+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.1c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 159) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:47.432177+0000 osd.2 (osd.2) 158 : cluster [DBG] 11.1c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:47.446481+0000 osd.2 (osd.2) 159 : cluster [DBG] 11.1c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 797831 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:18.951000+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:19.951343+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66871296 unmapped: 1228800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:20.951569+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 1220608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:21.951786+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:51.372806+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:51.386919+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 161) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:51.372806+0000 osd.2 (osd.2) 160 : cluster [DBG] 11.1e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:51.386919+0000 osd.2 (osd.2) 161 : cluster [DBG] 11.1e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66879488 unmapped: 1220608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:22.952046+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66887680 unmapped: 1212416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 798980 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:23.952191+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66912256 unmapped: 1187840 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:24.952387+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66920448 unmapped: 1179648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:25.952545+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:55.395049+0000 osd.2 (osd.2) 162 : cluster [DBG] 11.1b scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:55.409182+0000 osd.2 (osd.2) 163 : cluster [DBG] 11.1b scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 163) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:55.395049+0000 osd.2 (osd.2) 162 : cluster [DBG] 11.1b scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:55.409182+0000 osd.2 (osd.2) 163 : cluster [DBG] 11.1b scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:26.952730+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66936832 unmapped: 1163264 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:27.952870+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:57.393533+0000 osd.2 (osd.2) 164 : cluster [DBG] 7.11 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:14:57.407651+0000 osd.2 (osd.2) 165 : cluster [DBG] 7.11 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 165) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:57.393533+0000 osd.2 (osd.2) 164 : cluster [DBG] 7.11 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:14:57.407651+0000 osd.2 (osd.2) 165 : cluster [DBG] 7.11 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66945024 unmapped: 1155072 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 801277 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:28.953066+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:29.953191+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66953216 unmapped: 1146880 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:30.953322+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.892942429s of 13.915967941s, submitted: 8
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:31.953454+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:01.348220+0000 osd.2 (osd.2) 166 : cluster [DBG] 3.16 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:01.362410+0000 osd.2 (osd.2) 167 : cluster [DBG] 3.16 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 167) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:01.348220+0000 osd.2 (osd.2) 166 : cluster [DBG] 3.16 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:01.362410+0000 osd.2 (osd.2) 167 : cluster [DBG] 3.16 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:32.953620+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66961408 unmapped: 1138688 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802425 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:33.953827+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 1130496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:34.953957+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:04.310593+0000 osd.2 (osd.2) 168 : cluster [DBG] 8.1c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:04.324673+0000 osd.2 (osd.2) 169 : cluster [DBG] 8.1c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 169) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:04.310593+0000 osd.2 (osd.2) 168 : cluster [DBG] 8.1c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:04.324673+0000 osd.2 (osd.2) 169 : cluster [DBG] 8.1c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 1130496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:35.954146+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66969600 unmapped: 1130496 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:36.954346+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:37.954532+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66977792 unmapped: 1122304 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 803573 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:38.954747+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:39.954926+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:09.316249+0000 osd.2 (osd.2) 170 : cluster [DBG] 11.1f scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:09.330353+0000 osd.2 (osd.2) 171 : cluster [DBG] 11.1f scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 171) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:09.316249+0000 osd.2 (osd.2) 170 : cluster [DBG] 11.1f scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:09.330353+0000 osd.2 (osd.2) 171 : cluster [DBG] 11.1f scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:40.955154+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 66994176 unmapped: 1105920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:41.955305+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:42.955482+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67002368 unmapped: 1097728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 804722 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:43.955686+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.017806053s of 13.042335510s, submitted: 6
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:44.955882+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:14.390582+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:14.425908+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 173) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:14.390582+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.e scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:14.425908+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.e scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:45.956110+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:15.416948+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.6 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:15.452350+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.6 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67026944 unmapped: 1073152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 175) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:15.416948+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.6 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:15.452350+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.6 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:46.956340+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:47.956493+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807016 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:48.956615+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67035136 unmapped: 1064960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:49.956755+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1056768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:50.956891+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1056768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:51.957079+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67043328 unmapped: 1056768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:52.957228+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 808163 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:53.957437+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:23.363706+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.7 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:23.399440+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.7 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67051520 unmapped: 1048576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 177) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:23.363706+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.7 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:23.399440+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.7 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:54.957703+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67059712 unmapped: 1040384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:55.957892+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.996475220s of 12.016470909s, submitted: 6
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67076096 unmapped: 1024000 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:56.958049+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:26.407253+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.17 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:26.435586+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.17 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67084288 unmapped: 1015808 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 179) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:26.407253+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.17 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:26.435586+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.17 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:57.958375+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67092480 unmapped: 1007616 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810458 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:58.958515+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:28.368947+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.f scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:28.407734+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.f scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67100672 unmapped: 999424 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:59.958699+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 181) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:28.368947+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.f scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:28.407734+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.f scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:00.958944+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67108864 unmapped: 991232 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:01.959172+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:31.315390+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:31.361229+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 183) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:31.315390+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.8 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:31.361229+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.8 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67117056 unmapped: 983040 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:02.959460+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811605 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:03.959623+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67125248 unmapped: 974848 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:04.959801+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:05.959963+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:35.337138+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.18 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:35.368884+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.18 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 185) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:35.337138+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.18 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:35.368884+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.18 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:06.960249+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67133440 unmapped: 966656 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:07.960423+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:08.960599+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812753 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:09.960744+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:10.960910+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67141632 unmapped: 958464 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.752367973s of 14.781843185s, submitted: 8
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:11.961082+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:41.188948+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:41.220754+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 187) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:41.188948+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.c scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:41.220754+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.c scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:12.961338+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:42.141049+0000 osd.2 (osd.2) 188 : cluster [DBG] 6.f scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:42.165587+0000 osd.2 (osd.2) 189 : cluster [DBG] 6.f scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67149824 unmapped: 950272 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 189) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:42.141049+0000 osd.2 (osd.2) 188 : cluster [DBG] 6.f scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:42.165587+0000 osd.2 (osd.2) 189 : cluster [DBG] 6.f scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:13.961603+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815047 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:14.961754+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:15.961896+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67158016 unmapped: 942080 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:16.962095+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:17.962258+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67166208 unmapped: 933888 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:18.962464+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:48.223231+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.13 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:48.255219+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.13 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67174400 unmapped: 925696 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 816195 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 191) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:48.223231+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.13 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:48.255219+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.13 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:19.962634+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:49.229295+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.19 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  will send 2025-11-29T05:15:49.282207+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.19 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client handle_log_ack log(last 193) v1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:49.229295+0000 osd.2 (osd.2) 192 : cluster [DBG] 9.19 scrub starts
Nov 29 05:45:30 compute-0 ceph-osd[91343]: log_client  logged 2025-11-29T05:15:49.282207+0000 osd.2 (osd.2) 193 : cluster [DBG] 9.19 scrub ok
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:20.962811+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67182592 unmapped: 917504 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:21.963023+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:22.963176+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67190784 unmapped: 909312 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:23.963335+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:24.963453+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67198976 unmapped: 901120 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:25.963584+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67207168 unmapped: 892928 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:26.963746+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:27.963903+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:28.964077+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67215360 unmapped: 884736 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:29.964247+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:30.964437+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67223552 unmapped: 876544 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:31.964586+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:32.964701+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:33.964830+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67231744 unmapped: 868352 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:34.964962+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:35.965128+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67239936 unmapped: 860160 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:36.965290+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:37.965427+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:38.965616+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67248128 unmapped: 851968 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:39.965767+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:40.965913+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67256320 unmapped: 843776 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:41.966099+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:42.966348+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:43.966574+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67264512 unmapped: 835584 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:44.966832+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:45.966980+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:46.967214+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67272704 unmapped: 827392 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:47.967428+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:48.967609+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:49.967772+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:50.967961+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67280896 unmapped: 819200 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:51.968147+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:52.968286+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:53.968410+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:54.968540+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:55.968722+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67289088 unmapped: 811008 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:56.968865+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:57.969036+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:58.969241+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67297280 unmapped: 802816 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:59.969477+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:00.969775+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:01.970055+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67305472 unmapped: 794624 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:02.970187+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:03.970381+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67313664 unmapped: 786432 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:04.970580+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:05.970736+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:06.970971+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67321856 unmapped: 778240 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:07.971129+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:08.971316+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67330048 unmapped: 770048 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:09.971532+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:10.971704+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:11.971919+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67338240 unmapped: 761856 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:12.972103+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 753664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:13.972340+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67346432 unmapped: 753664 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:14.972510+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:15.972642+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:16.972771+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67354624 unmapped: 745472 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:17.972942+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:18.973063+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:19.973252+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:20.973458+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67362816 unmapped: 737280 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:21.973692+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:22.973909+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67371008 unmapped: 729088 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:23.974122+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:24.974315+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:25.974479+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67379200 unmapped: 720896 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:26.974693+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:27.974911+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67387392 unmapped: 712704 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:28.975186+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:29.976533+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:30.976708+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67395584 unmapped: 704512 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:31.977199+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:32.977881+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67403776 unmapped: 696320 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:33.978741+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:34.978883+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67411968 unmapped: 688128 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:35.979034+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:36.979240+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:37.979420+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:38.979576+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67420160 unmapped: 679936 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:39.979836+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:40.980112+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67428352 unmapped: 671744 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:41.980280+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:42.980450+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:43.980602+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 663552 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:44.980871+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:45.981021+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67444736 unmapped: 655360 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:46.981176+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:47.981352+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:48.981532+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:49.981670+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67452928 unmapped: 647168 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:50.981801+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:51.981966+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67461120 unmapped: 638976 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:52.982147+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:53.982350+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67469312 unmapped: 630784 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:54.982505+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:55.982614+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:56.982775+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 622592 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:57.982887+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:58.983004+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:59.983105+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:00.983211+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67485696 unmapped: 614400 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:01.983328+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:02.983468+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67493888 unmapped: 606208 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:03.983588+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:04.983765+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67502080 unmapped: 598016 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:05.983879+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:06.984080+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:07.984381+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67510272 unmapped: 589824 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:08.984608+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:09.984791+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:10.984940+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67518464 unmapped: 581632 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:11.985102+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:12.985347+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67526656 unmapped: 573440 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:13.985528+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67534848 unmapped: 565248 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:14.985676+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:15.985800+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67543040 unmapped: 557056 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:16.985962+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:17.986126+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67551232 unmapped: 548864 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:18.986248+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:19.986490+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:20.986630+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67559424 unmapped: 540672 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:21.986837+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:22.987001+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:23.987127+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67567616 unmapped: 532480 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:24.987280+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:25.987426+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67575808 unmapped: 524288 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:26.987588+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:27.987712+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:28.987843+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67584000 unmapped: 516096 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:29.987951+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:30.988081+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:31.988241+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67592192 unmapped: 507904 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:32.988321+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:33.988708+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:34.988868+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67600384 unmapped: 499712 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:35.989011+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:36.989495+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67608576 unmapped: 491520 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:37.989978+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 483328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:38.990332+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 483328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:39.990456+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67616768 unmapped: 483328 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:40.990599+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 475136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:41.990854+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 475136 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:42.991000+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:43.991110+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:44.991342+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 466944 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:45.991465+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:46.991620+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 458752 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:47.991754+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:48.991914+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:49.992079+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67649536 unmapped: 450560 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:50.992243+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:51.992465+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:52.992628+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67657728 unmapped: 442368 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:53.992775+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:54.992910+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:55.993066+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67665920 unmapped: 434176 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:56.993194+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:57.993352+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:58.993800+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 425984 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:59.993943+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 417792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:00.994076+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67682304 unmapped: 417792 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:01.994336+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:02.994561+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:03.994725+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67690496 unmapped: 409600 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:04.994881+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:05.995039+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67698688 unmapped: 401408 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:06.995212+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:07.995353+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:08.995501+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:09.995670+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 393216 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:10.995820+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:11.995982+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:12.996130+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 376832 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:13.996315+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 368640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:14.996455+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 368640 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:15.996598+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 360448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:16.996774+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 360448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:17.996913+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67739648 unmapped: 360448 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:18.997075+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 352256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:19.997200+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67747840 unmapped: 352256 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:20.997318+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 344064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:21.997453+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 344064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:22.997573+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67756032 unmapped: 344064 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:23.997703+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:24.997873+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:25.997994+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67764224 unmapped: 335872 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:26.998133+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 327680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:27.998308+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 327680 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:28.998469+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:29.998581+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67780608 unmapped: 319488 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:30.998717+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67788800 unmapped: 311296 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:31.998867+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:32.999009+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:33.999186+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67796992 unmapped: 303104 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:34.999351+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:35.999538+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67805184 unmapped: 294912 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:36.999690+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:38.000200+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:39.000548+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67813376 unmapped: 286720 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:40.000705+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:41.001008+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:42.001472+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 278528 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:43.001636+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:44.001794+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:45.002120+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 270336 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:46.002257+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:47.002409+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67837952 unmapped: 262144 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:48.002555+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:49.002790+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:50.002967+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67846144 unmapped: 253952 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:51.003132+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:52.003332+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:53.003616+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67854336 unmapped: 245760 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:54.003808+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:55.004020+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67862528 unmapped: 237568 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:56.004149+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:57.004329+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 229376 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:58.004598+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:59.004801+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:00.004961+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 221184 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:01.005155+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:02.005363+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:03.005518+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67895296 unmapped: 204800 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:04.005696+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:05.005847+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:06.005981+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 196608 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:07.006120+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:08.006302+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:09.006477+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 188416 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:10.006592+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:11.006744+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:12.006944+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67919872 unmapped: 180224 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:13.007072+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:14.007197+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 172032 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:15.007316+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:16.007433+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 155648 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:17.007581+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5451 writes, 23K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5451 writes, 770 syncs, 7.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5451 writes, 23K keys, 5451 commit groups, 1.0 writes per commit group, ingest: 18.29 MB, 0.03 MB/s
                                           Interval WAL: 5451 writes, 770 syncs, 7.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68009984 unmapped: 90112 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:18.007713+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:19.007809+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:20.007956+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68018176 unmapped: 81920 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:21.008070+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:22.008348+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68026368 unmapped: 73728 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:23.008508+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:24.008623+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:25.008743+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68034560 unmapped: 65536 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:26.008842+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:27.008941+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:28.009041+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 57344 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:29.009183+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:30.009315+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68050944 unmapped: 49152 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:31.009442+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:32.009580+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:33.009696+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 40960 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:34.009813+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:35.009943+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 32768 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:36.010063+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:37.010204+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:38.010341+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 24576 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:39.010515+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:40.010694+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:41.010856+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68083712 unmapped: 16384 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:42.011098+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:43.011387+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:44.011930+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 8192 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:45.012158+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:46.012316+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68100096 unmapped: 0 heap: 68100096 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:47.012593+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:48.012789+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:49.012954+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68108288 unmapped: 1040384 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:50.013110+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:51.013241+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:52.013520+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68116480 unmapped: 1032192 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:53.013712+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:54.013964+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:55.014213+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:56.014390+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68124672 unmapped: 1024000 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:57.014566+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:58.014731+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:59.014876+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68132864 unmapped: 1015808 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:00.015054+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:01.015211+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:02.015366+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68141056 unmapped: 1007616 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:03.015512+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:04.015703+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 999424 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:05.015842+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:06.015960+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:07.016110+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68157440 unmapped: 991232 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:08.016359+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:09.016511+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68165632 unmapped: 983040 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:10.016706+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68173824 unmapped: 974848 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.014465332s of 299.041870117s, submitted: 8
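The _kv_sync_thread utilization line quantifies just how idle this OSD is: over the ~299 s reporting window the kv sync thread was busy for only ~27 ms and submitted 8 transactions. Worked out (illustrative arithmetic only):

    idle, total, submitted = 299.014465332, 299.041870117, 8
    print(idle / total)        # ~0.99991 -> 99.99% idle
    print(total - idle)        # ~0.0274 s of work in ~5 minutes
    print(submitted / total)   # ~0.027 transactions per second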
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:11.016832+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68190208 unmapped: 958464 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:12.017042+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:13.017242+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:14.017394+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68206592 unmapped: 942080 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:15.017528+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:16.017655+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:17.017752+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68214784 unmapped: 933888 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:18.017876+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:19.018014+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68222976 unmapped: 925696 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:20.018170+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:21.018341+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:22.018536+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:23.018672+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:24.018807+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:25.018941+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:26.019083+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:27.019223+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:28.019317+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:29.019429+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:30.019568+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:31.019731+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:32.019930+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:33.020113+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:34.073894+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:35.074040+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:36.074185+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:37.074336+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:38.074516+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:39.074717+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:40.074853+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:41.075011+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:42.075204+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:43.075316+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:44.075488+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:45.075632+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:46.075764+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:47.075888+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:48.076028+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:49.076166+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:50.076359+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:51.076556+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:52.076812+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:53.076979+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:54.077216+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:55.077391+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:56.077578+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:57.077746+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:58.077892+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:59.078018+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:00.078243+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:01.078493+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:02.078731+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:03.078903+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:04.079066+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:05.079202+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:06.079343+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 761856 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:07.079468+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 761856 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:08.079603+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:09.079764+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:10.079933+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:11.080128+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 745472 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:12.080354+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 737280 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:13.080554+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 737280 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:14.080698+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:15.080862+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:16.080975+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:17.081087+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:18.081309+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:19.081495+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:20.081627+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:21.081814+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:22.081989+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:23.082097+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:24.082235+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:25.082317+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:26.082465+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:27.082617+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:28.082747+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:29.082868+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:30.082978+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:31.083104+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:32.083295+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:33.083498+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:34.083653+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:35.083790+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:36.083930+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:37.084082+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:38.084211+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:39.084316+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:40.084485+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:41.084645+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:42.084860+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:43.085047+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:44.085218+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:45.085329+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:46.085463+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:47.085603+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:48.085802+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:49.085999+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:50.086228+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:51.086428+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:52.086639+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:53.086763+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:54.086894+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:55.087016+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:56.087133+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:57.087279+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:58.087413+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:59.087549+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:00.087680+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:01.087827+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:02.088034+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:03.088132+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:04.088256+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:05.088374+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:06.088523+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:07.088646+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:08.088778+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:09.088934+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:10.089064+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:11.089190+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:12.089350+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:13.089505+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:14.089658+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:15.089783+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:16.089940+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:17.090167+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:18.090306+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:19.090460+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:20.090617+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:21.090767+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:22.090922+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:23.091070+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:24.091696+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:25.091858+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:26.091996+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:27.092168+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:28.092905+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:29.093345+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:30.093549+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:31.093817+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:32.094087+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:33.094430+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:34.094741+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:35.095109+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:36.095315+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:37.095451+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:38.095583+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:39.095735+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:40.095907+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:41.096105+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:42.096305+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:43.096422+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:44.096558+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:45.096701+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:46.096817+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:47.097108+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:48.097372+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:49.097586+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:50.097698+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:51.097836+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:52.098025+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:53.098192+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:54.098363+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:55.098480+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:56.098614+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:57.098845+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:58.099025+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:59.099355+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:00.099595+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:01.099792+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:02.100061+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:03.100332+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:04.100483+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:05.100620+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:06.100795+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:07.101025+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:08.101238+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:09.101430+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:10.101607+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:11.101801+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:12.102052+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:13.102389+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:14.102580+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:15.102737+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:16.102902+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:17.103205+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:18.103456+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:19.103708+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:20.103940+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:21.104200+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:22.104505+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:23.104721+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:24.104883+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:25.105008+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:26.105214+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:27.105470+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:28.105630+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:29.105819+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:30.105951+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:31.106103+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:32.106405+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:33.106618+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:34.106811+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:35.106964+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:36.107178+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:37.107342+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:38.107557+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:39.107731+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:40.107863+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:41.108067+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:42.108391+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:43.108558+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:44.108705+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:45.123319+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:46.123483+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:47.123733+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:48.123955+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:49.124139+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:50.124383+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:51.124570+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:52.124816+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:53.124968+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:54.125112+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:55.125292+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:56.125556+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:57.125858+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:58.126032+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:59.126180+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:00.126372+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:01.126635+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:02.126848+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:03.126993+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:04.127236+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:05.127477+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:06.127672+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:07.127826+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:08.127985+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:09.128165+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:10.128303+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:11.128443+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:12.128610+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:13.128732+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:14.128836+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:15.128929+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:16.129062+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:17.129209+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:18.129332+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:19.129476+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:20.129654+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:21.129937+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:22.130097+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:23.130227+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:24.130477+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc ms_handle_reset ms_handle_reset con 0x557761d1dc00
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: get_auth_request con 0x557764265800 auth_method 0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_configure stats_period=5
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:25.130624+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:26.130768+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:27.130943+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:28.131059+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:29.131223+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:30.131395+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:31.131653+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:32.131917+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:33.132097+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:34.132234+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:35.132362+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:36.132575+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:37.132806+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:38.132999+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:39.133245+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:40.133513+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:41.133616+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:42.133761+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:43.133920+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:44.134107+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:45.134218+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:46.134323+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:47.134432+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:48.134612+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:49.134809+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:50.135019+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:51.135183+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:52.135391+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:53.135511+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:54.135667+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:55.135823+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:56.135942+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:57.136078+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:58.136226+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:59.136351+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:00.136467+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:01.136579+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:02.136960+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:03.137140+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:04.137340+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:05.137501+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:06.137697+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:07.137838+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:08.137988+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:09.138108+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:10.138324+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:11.138543+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:12.138725+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:13.138839+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:14.139055+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:15.139210+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:16.139363+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:17.139504+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:18.139665+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:19.139814+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:20.139980+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:21.140148+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:22.140373+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:23.140552+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:24.140686+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:25.140898+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:26.141149+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:27.141345+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:28.141507+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:29.141650+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:30.141813+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:31.141997+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:32.142880+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:33.143030+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:34.143160+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:35.143357+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:36.143499+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:37.143645+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:38.143845+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:39.144029+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:40.144232+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:41.144373+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:42.144557+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:43.144676+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:44.144817+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:45.144956+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:46.145096+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:47.145254+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:48.145487+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:49.145680+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:50.145865+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:51.146092+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:52.146356+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:53.146501+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:54.146704+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:55.146861+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:56.147055+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:57.147198+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:58.147363+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:59.147480+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:00.147604+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:01.147768+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:02.147945+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:03.148056+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:04.148247+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:05.148438+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:06.148605+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:07.148747+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:08.148860+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:09.149180+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:10.149496+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:11.149814+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:12.151061+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:13.151295+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:14.151632+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:15.151823+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:16.152051+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:17.152251+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:18.152502+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:19.152730+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:20.152986+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:21.153203+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:22.153568+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:23.153800+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:24.154038+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:25.154309+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:26.154516+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:27.154701+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:28.154849+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:29.155034+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:30.155192+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:31.155322+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:32.155613+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:33.155832+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:34.156026+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:35.156165+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:36.156380+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:37.156499+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:38.156630+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:39.156804+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:40.156997+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:41.157148+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:42.157756+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:43.157924+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:44.158041+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:45.158187+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:46.158345+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:47.158490+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:48.158673+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:49.158910+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:50.159088+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:51.159300+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:52.159596+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:53.159848+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:54.160144+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:55.160305+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:56.160506+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:57.160841+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:58.161120+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:59.161410+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:00.161702+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:01.161952+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:02.162223+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:03.162429+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:04.162658+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:05.162906+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:06.163225+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:07.163567+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:08.163828+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:09.164026+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:10.164246+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:11.164468+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:12.164709+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:13.165017+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:14.165421+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:15.165619+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 335872 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:16.165769+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:17.165908+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:18.166070+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:19.166467+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:20.166659+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:21.166872+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:22.167103+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:23.167383+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:24.167644+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:25.167906+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:26.168120+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:27.168444+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:28.168596+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:29.168776+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:30.168994+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:31.169215+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:32.169431+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:33.169622+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:34.169803+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:35.169986+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:36.193432+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:37.193690+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:38.193987+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:39.194235+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:40.194525+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:41.194768+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:42.195066+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:43.195399+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:44.195628+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:45.195950+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:46.196258+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:47.196503+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:48.196661+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:49.196861+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:50.197065+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:51.197340+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:52.197729+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:53.197992+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:54.198156+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:55.198335+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:56.198446+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:57.198635+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:58.198818+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:59.199023+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:00.199160+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:01.199345+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:02.199538+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:03.199729+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:04.199924+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:05.200112+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:06.200326+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:07.200486+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:08.200622+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:09.200744+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:10.200917+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 417792 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:11.201091+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:12.201236+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14567 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:13.201407+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:14.201564+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:15.201720+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:16.201884+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:17.202022+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:18.202155+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:19.202371+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:20.202500+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:21.202629+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:22.202801+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:23.202936+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:24.203090+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:25.203224+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:26.203369+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:27.203535+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:28.203819+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:29.204061+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:30.204370+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:31.204600+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:32.204869+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:33.205178+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:34.205426+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:35.205705+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:36.205907+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:37.206137+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:38.206338+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:39.206607+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:40.206820+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:41.207036+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:42.207298+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:43.207471+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:44.207623+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:45.207781+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:46.207916+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:47.208067+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:48.208230+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:49.208487+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:50.208716+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:51.208961+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:52.209259+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:53.209518+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:54.209693+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:55.209891+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:56.210025+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:57.210189+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:58.210372+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:59.210516+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:00.210696+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:01.210878+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:02.211039+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:03.211224+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:04.211386+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:05.211571+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:06.211754+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:07.211910+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:08.212208+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:09.212389+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:10.212610+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:11.212837+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:12.213102+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:13.213309+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:14.213487+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:15.213660+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:16.213819+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:17.213986+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5631 writes, 23K keys, 5631 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5631 writes, 860 syncs, 6.55 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:18.214166+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:19.214384+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:20.214599+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:21.214775+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:22.215011+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:23.215161+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:24.215343+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:25.215503+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:26.215662+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:27.215843+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:28.215990+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:29.216110+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:30.216375+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:31.216525+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:32.216734+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:33.216871+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:34.217015+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:35.217158+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:36.217326+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:37.217508+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:38.217782+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:39.217981+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:40.218152+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:41.218348+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:42.218556+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:43.218711+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:44.218885+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:45.219084+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:46.219319+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:47.219494+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:48.219652+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:49.219812+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:50.219968+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:51.220141+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:52.220335+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:53.220669+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:54.220848+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:55.221133+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:56.221399+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:57.221674+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:58.221913+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:59.222058+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:00.222227+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:01.222439+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:02.222672+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:03.222885+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:04.223047+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:05.223203+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:06.223367+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:07.223563+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:08.223805+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:09.224001+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:10.224173+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.950073242s of 600.213012695s, submitted: 90
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:11.224307+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:12.224467+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:13.224636+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:14.224863+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:15.225057+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:16.225198+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:17.225365+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:18.225571+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:19.225781+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:20.225936+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:21.226086+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:22.226259+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:23.226422+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:24.226611+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:25.226766+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:26.226951+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:27.227073+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:28.227246+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:29.227495+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:30.227641+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:31.227784+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:32.228016+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:33.228183+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:34.228386+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:35.228521+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:36.228654+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:37.228814+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:38.228957+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:39.229098+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:40.229246+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:41.229466+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:42.229663+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:43.229795+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:44.229939+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:45.230063+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:46.230230+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:47.230444+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:48.230653+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:49.230862+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:50.231075+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:51.231227+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:52.231421+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:53.231624+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:54.231787+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:55.232026+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:56.232188+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:57.232384+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:58.232576+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:59.232725+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:00.232858+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:01.233028+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:02.233203+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:03.233400+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:04.233578+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:05.233719+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:06.233873+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:07.234035+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:08.234167+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:09.234349+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:10.234523+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:11.234699+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:12.234879+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:13.235061+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:14.235213+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:15.235432+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:16.235630+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:17.235809+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:18.236020+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:19.236230+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:20.236449+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:21.236632+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:22.236811+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:23.237010+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:24.237158+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:25.237328+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:26.237492+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:27.237627+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:28.237775+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:29.238221+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:30.238562+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:31.238748+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:32.238994+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:33.239193+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:34.239380+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:35.239682+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:36.239830+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:37.240065+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:38.240340+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:39.240566+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:40.240749+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:41.240903+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:42.241073+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:43.241360+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:44.241538+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:45.241694+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:46.241851+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:47.241991+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:48.242113+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:49.242335+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:50.242555+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:51.242710+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:52.242961+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:53.243120+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:54.243310+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:55.243483+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:56.243655+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:57.243858+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:58.244079+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:59.244249+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:00.244485+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:01.244680+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:02.245039+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:03.245342+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:04.245572+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:05.245837+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:06.246165+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:07.246425+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:08.246656+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:09.246855+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:10.247027+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:11.247215+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:12.247594+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:13.247889+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:14.248141+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:15.248350+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:16.248563+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:17.248839+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:18.249047+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:19.249336+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:20.249641+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:21.249844+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:22.250074+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:23.250249+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:24.250458+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:25.250595+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:26.250828+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:27.250989+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:28.251180+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:29.251373+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:30.251546+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:31.251767+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:32.252048+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:33.252327+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:34.252496+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:35.252675+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:36.252843+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:37.252989+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:38.253215+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:39.253421+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:40.253582+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:41.253932+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:42.254314+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:43.254582+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:44.254755+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:45.254891+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:46.255101+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:47.255339+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:48.255524+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:49.255709+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:50.255883+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:51.256094+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:52.256238+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:53.256402+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:54.256562+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:55.256722+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:56.256894+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:57.257065+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:58.257219+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:59.257365+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:00.257510+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:01.257717+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:02.257925+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:03.258120+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:04.258306+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:05.258580+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:06.258817+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:07.259016+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:08.259256+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:09.259482+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:10.259651+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:11.259960+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:12.260188+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:13.260364+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:14.260525+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:15.260665+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:16.260947+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:17.261256+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:18.261539+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:19.261731+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:20.261941+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:21.262210+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:22.262538+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:23.262680+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:24.262792+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:25.262914+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:26.263355+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:27.263560+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:28.263822+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:29.264111+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557763f08000
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:30.264338+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 200.325714111s of 200.562088013s, submitted: 90
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:31.264529+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:32.264705+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 17432576 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916268 data_alloc: 218103808 data_used: 180224
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 123 ms_handle_reset con 0x557763f08000 session 0x5577631b30e0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:33.264882+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 17440768 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557765b97c00
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:34.265085+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 17481728 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:35.265233+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 124 ms_handle_reset con 0x557765b97c00 session 0x557765010000
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbe39000/0x0/0x4ffc00000, data 0xd2e970/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 17408000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:36.265420+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:37.265632+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925293 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:38.265860+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:39.266071+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:40.266322+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:41.266568+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbe38000/0x0/0x4ffc00000, data 0xd2e993/0xde4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.306422234s of 10.512654305s, submitted: 45
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:42.266786+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:43.266966+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:44.267166+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:45.267366+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:46.267560+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:47.267761+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:48.267916+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:49.268131+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:50.268361+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:51.268554+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:52.268808+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:53.269026+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557765b96000
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.103341103s of 12.113625526s, submitted: 13
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:54.269228+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 10
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:55.269377+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:56.269591+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:57.269730+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929899 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:58.269935+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:59.270106+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:00.270252+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:01.270399+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:02.270651+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929899 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 11
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:03.270826+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557765b96400
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.626058578s of 10.632491112s, submitted: 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:04.270999+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:05.271145+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:06.271319+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:07.271453+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929723 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:08.271598+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:09.271777+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:10.272240+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:11.272612+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:12.272897+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929723 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:13.273114+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:14.273248+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:15.273482+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:16.273639+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.001748085s of 12.013872147s, submitted: 4
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:17.274080+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928857 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:18.274339+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:19.274482+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:20.274674+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:21.274891+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:22.275029+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928857 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:23.275156+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:24.275299+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:25.275485+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:26.276446+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:27.276750+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930625 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:28.276995+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.040862083s of 12.053675652s, submitted: 4
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:29.277233+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:30.277460+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:31.277637+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:32.277917+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928167 data_alloc: 218103808 data_used: 184320
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:33.278094+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:34.278233+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:35.278418+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 125 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:36.278627+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:37.278759+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931461 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:38.278878+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:39.279016+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:40.279177+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:41.279486+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:42.279691+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931461 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:43.279814+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:44.279993+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.859819412s of 16.871786118s, submitted: 28
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:45.280172+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557765b96800
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 17309696 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:46.280301+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 17309696 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 12
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:47.280425+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 17235968 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939171 data_alloc: 218103808 data_used: 200704
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:48.280592+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 17219584 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:49.280712+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe2e000/0x0/0x4ffc00000, data 0xd33b54/0xdee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 17219584 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:50.280823+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:51.280940+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:52.281073+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:53.281176+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:54.281335+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:55.281430+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:56.281555+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:57.281698+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:58.281844+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:59.282008+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:00.282134+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:01.282212+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:02.282404+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:03.282568+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.211776733s of 18.236698151s, submitted: 18
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:04.282714+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fbe29000/0x0/0x4ffc00000, data 0xd372c6/0xdf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:05.282827+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:06.282937+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 17104896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:07.283085+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fbe25000/0x0/0x4ffc00000, data 0xd38edc/0xdf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 17104896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951491 data_alloc: 218103808 data_used: 208896
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:08.283204+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fbe23000/0x0/0x4ffc00000, data 0xd3aaf2/0xdfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:09.283341+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:10.283461+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:11.283642+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe1f000/0x0/0x4ffc00000, data 0xd3c793/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:12.283789+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955423 data_alloc: 218103808 data_used: 212992
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:13.283901+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.339121819s of 10.671369553s, submitted: 123
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:14.284046+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:15.284242+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe20000/0x0/0x4ffc00000, data 0xd3c793/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 17047552 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:16.284481+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 15990784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:17.284648+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe19000/0x0/0x4ffc00000, data 0xd3fd71/0xe03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 15990784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961321 data_alloc: 218103808 data_used: 221184
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:18.284770+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:19.284950+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:20.285079+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:21.285193+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe1c000/0x0/0x4ffc00000, data 0xd3fcd6/0xe02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:22.285336+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:23.285472+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959079 data_alloc: 218103808 data_used: 221184
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:24.285603+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.049346924s of 10.169968605s, submitted: 40
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:25.285734+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:26.286176+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:27.286382+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:28.286588+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965021 data_alloc: 218103808 data_used: 229376
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:29.286705+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:30.286889+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:31.287099+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:32.287337+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 15917056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:33.287467+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 15884288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964141 data_alloc: 218103808 data_used: 229376
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe18000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe18000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:34.287664+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 15892480 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:35.287769+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 15892480 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.991994858s of 11.068979263s, submitted: 40
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:36.287880+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:37.287988+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe14000/0x0/0x4ffc00000, data 0xd433da/0xe09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:38.288143+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968139 data_alloc: 218103808 data_used: 237568
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:39.288353+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 15884288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:40.288450+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:41.288588+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:42.288758+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:43.289009+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970423 data_alloc: 218103808 data_used: 237568
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:44.289146+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:45.289341+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:46.289595+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:47.289800+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.993079185s of 12.021212578s, submitted: 14
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:48.290245+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 15851520 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970423 data_alloc: 218103808 data_used: 237568
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:49.290487+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 15851520 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:50.290688+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:51.290915+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:52.291215+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:53.291337+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:54.291495+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:55.291711+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:56.291952+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:57.292215+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.520608902s of 10.532555580s, submitted: 3
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:58.292462+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:59.292679+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:00.292898+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:01.293100+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:02.293337+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:03.293512+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:04.293642+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:05.293834+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:06.294028+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:07.294215+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:08.294360+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:09.294557+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:10.294768+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 15802368 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:11.294910+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.498485565s of 13.504686356s, submitted: 2
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 ms_handle_reset con 0x557765b96800 session 0x557764f4fe00
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:12.295092+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 13
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:13.295219+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971135 data_alloc: 218103808 data_used: 237568
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:14.295391+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:15.295510+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd44f73/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:16.295670+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:17.295824+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:18.295955+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 14983168 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974495 data_alloc: 218103808 data_used: 237568
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd44f73/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:19.296131+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:20.296248+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0a000/0x0/0x4ffc00000, data 0xd48629/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:21.296485+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:22.296727+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0a000/0x0/0x4ffc00000, data 0xd48629/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:23.296861+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.806947708s of 11.988073349s, submitted: 235
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980069 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:24.297032+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0b000/0x0/0x4ffc00000, data 0xd4858e/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:25.297192+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:26.297343+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:27.297498+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:28.297722+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:29.297897+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:30.298085+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:31.298301+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:32.298539+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:33.298683+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:34.298898+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:35.299150+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:36.299357+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:37.299588+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:38.299725+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:39.299845+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:40.300016+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:41.300166+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:42.300418+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:43.300602+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:44.300788+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:45.300967+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:46.301139+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:47.301346+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.121107101s of 24.133726120s, submitted: 13
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd4a08c/0xe15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:48.301529+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984139 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:49.301652+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:50.301817+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd4a127/0xe16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:51.301962+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:52.302152+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:53.302288+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986619 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd4a186/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:54.302478+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:55.302612+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd4a186/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:56.302752+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:57.302846+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:58.302958+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986571 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:59.303120+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.074976921s of 12.103597641s, submitted: 7
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:00.303219+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe06000/0x0/0x4ffc00000, data 0xd4a157/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:01.303376+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:02.303498+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:03.303619+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988043 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe06000/0x0/0x4ffc00000, data 0xd4a185/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:04.303765+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 140 handle_osd_map epochs [141,142], i have 140, src has [1,142]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:05.303889+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:06.304012+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:07.304128+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:08.304381+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993155 data_alloc: 218103808 data_used: 253952
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe01000/0x0/0x4ffc00000, data 0xd4d8db/0xe1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:09.304667+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:10.304839+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.097406387s of 11.276707649s, submitted: 61
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:11.304939+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:12.305172+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:13.305361+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997329 data_alloc: 218103808 data_used: 262144
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:14.305551+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:15.305785+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:16.305935+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:17.306060+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:18.306251+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f3f4/0xe20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998041 data_alloc: 218103808 data_used: 262144
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:19.306453+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:20.306589+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.991775513s of 10.043452263s, submitted: 26
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:21.306721+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 13819904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:22.306889+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:23.307003+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000645 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:24.307140+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:25.307384+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:26.307524+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e25/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:27.307674+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:28.307805+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002413 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:29.308145+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50f7f/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:30.308318+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:31.308454+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:32.308626+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.350621223s of 11.425502777s, submitted: 31
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:33.308764+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf5000/0x0/0x4ffc00000, data 0xd51047/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007621 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:34.308916+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:35.309078+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:36.309238+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 12541952 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:37.309364+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf2000/0x0/0x4ffc00000, data 0xd511a7/0xe28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 12517376 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:38.309501+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 12517376 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011157 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:39.309717+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 12509184 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:40.309855+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 12460032 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:41.309976+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 12451840 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf3000/0x0/0x4ffc00000, data 0xd5117b/0xe28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:42.310134+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 12451840 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:43.310317+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.751610756s of 11.044014931s, submitted: 37
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 12419072 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010409 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:44.310502+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:45.310729+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:46.310971+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf4000/0x0/0x4ffc00000, data 0xd510b1/0xe27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:47.311185+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 12386304 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:48.311317+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50fe8/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010199 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:49.311471+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:50.311671+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf6000/0x0/0x4ffc00000, data 0xd50fb7/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:51.311786+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:52.311976+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:53.312170+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006959 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:54.312325+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.834184647s of 10.926655769s, submitted: 30
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50dbd/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:55.312496+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:56.312637+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:57.312780+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:58.312941+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008855 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:59.313061+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:00.315401+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:01.315521+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50e84/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:02.315685+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:03.315800+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010495 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:04.315921+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.022552490s of 10.158326149s, submitted: 18
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:05.316091+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:06.316293+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e58/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:07.316418+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e58/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:08.316596+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008551 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:09.316724+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:10.316902+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:11.317032+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 12271616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:12.317243+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfc000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:13.317422+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007861 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:14.317822+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:15.318030+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:16.318216+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.966061592s of 12.095813751s, submitted: 15
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:17.318432+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50dbc/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:18.318612+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007861 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:19.318773+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:20.318930+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:21.319108+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:22.319334+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:23.319472+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009501 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:24.319632+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:25.319817+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:26.320013+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e51/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.408202171s of 10.675523758s, submitted: 17
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:27.320141+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:28.320341+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011349 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:29.320531+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50f47/0xe24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:30.320657+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:31.320811+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:32.321044+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12001280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd526bd/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:33.321195+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12001280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1018011 data_alloc: 218103808 data_used: 278528
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:34.321334+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbded000/0x0/0x4ffc00000, data 0xd5a18b/0xe2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:35.321460+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:36.321613+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:37.321712+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.418998718s of 10.583705902s, submitted: 59
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 9740288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbdc6000/0x0/0x4ffc00000, data 0xd839d6/0xe57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:38.321849+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 9740288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024267 data_alloc: 218103808 data_used: 278528
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:39.322008+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbdb9000/0x0/0x4ffc00000, data 0xd9187a/0xe64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [1])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 7217152 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:40.322105+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 6660096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:41.322237+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 6668288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:42.322414+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fabc9000/0x0/0x4ffc00000, data 0xde24b0/0xeb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 6643712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:43.322564+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028237 data_alloc: 218103808 data_used: 278528
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 6725632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:44.322719+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 6709248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fab98000/0x0/0x4ffc00000, data 0xe11be3/0xee5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:45.322835+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5660672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:46.322989+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 5603328 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:47.323114+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.068192482s of 10.000307083s, submitted: 80
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fab94000/0x0/0x4ffc00000, data 0xe13646/0xee8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5120000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:48.323255+0000)
Nov 29 05:45:30 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037963 data_alloc: 218103808 data_used: 286720
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5120000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fab47000/0x0/0x4ffc00000, data 0xe61e3d/0xf36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:49.323446+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 4784128 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:50.323567+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4300800 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:51.323692+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 4071424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:52.323857+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3792896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faaea000/0x0/0x4ffc00000, data 0xebf2b3/0xf94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:53.323979+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043289 data_alloc: 218103808 data_used: 294912
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3792896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:54.324129+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3252224 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:55.324234+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faab2000/0x0/0x4ffc00000, data 0xef3a69/0xfca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2957312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:56.324405+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 2662400 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:57.324583+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xf07fb6/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.752583504s of 10.000064850s, submitted: 81
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 1425408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:58.324729+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048977 data_alloc: 218103808 data_used: 294912
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86614016 unmapped: 1417216 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:59.324871+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faa4a000/0x0/0x4ffc00000, data 0xf5e02d/0x1034000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:00.324972+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86704128 unmapped: 2375680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:01.325118+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 2940928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:02.325301+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2990080 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:03.325410+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa5ca000/0x0/0x4ffc00000, data 0xfc91da/0x10a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067411 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2990080 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa5ca000/0x0/0x4ffc00000, data 0xfc91da/0x10a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:04.325536+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 1630208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:05.325712+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:06.325838+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:07.325952+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.663159370s of 10.000439644s, submitted: 117
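The _kv_sync_thread line gives idle time over a measurement window plus the number of transactions submitted; restated as a busy fraction and a commit rate, it confirms the OSD is close to idle:

```python
# From the _kv_sync_thread utilization line above.
idle, window, submitted = 9.663159370, 10.000439644, 117

busy = window - idle
print(f"busy {busy:.3f} s of {window:.3f} s ({busy / window:.1%})")
print(f"{submitted} txns submitted -> {submitted / window:.1f} txns/s")
```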
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 2211840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:08.326074+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080451 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 3194880 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557763f08000
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:09.326175+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa4ea000/0x0/0x4ffc00000, data 0x10a58fb/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 3219456 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:10.326312+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 14
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 2875392 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:11.326490+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 2875392 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa4e3000/0x0/0x4ffc00000, data 0x10afbb3/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:12.326727+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 1466368 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:13.326899+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093257 data_alloc: 218103808 data_used: 307200
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:14.327054+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 1835008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:15.327180+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa436000/0x0/0x4ffc00000, data 0x1157f2b/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 1818624 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:16.327318+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89808896 unmapped: 1368064 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:17.327437+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7984 writes, 30K keys, 7984 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 7984 writes, 1865 syncs, 4.28 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2353 writes, 6787 keys, 2353 commit groups, 1.0 writes per commit group, ingest: 7.64 MB, 0.01 MB/s
                                           Interval WAL: 2353 writes, 1005 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
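The Interval rows of the stats dump above cover the last 600 s; dividing out gives per-second rates, and (taking RocksDB's "MB" as MiB, an assumption) a rough ingest size per write:

```python
# Derived from the "Interval" rows of the DB Stats dump above.
interval_s = 600.0
writes, keys, wal_syncs = 2353, 6787, 1005
ingest_mib = 7.64   # assumes RocksDB's "MB" means MiB

print(f"{writes / interval_s:.1f} writes/s, {keys / interval_s:.1f} keys/s, "
      f"{wal_syncs / interval_s:.1f} WAL syncs/s")
print(f"~{ingest_mib * 2**20 / writes:.0f} bytes ingested per write")
```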
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:18.327592+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.250799179s of 10.555690765s, submitted: 96
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157dbe/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088671 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:19.327774+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:20.327942+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:21.328110+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1157df1/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:22.328241+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:23.328411+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087325 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:24.328568+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc ms_handle_reset ms_handle_reset con 0x557764265800
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: get_auth_request con 0x557765b96800 auth_method 0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_configure stats_period=5
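The five mgrc/monclient lines above are one complete mgr-session bounce: the connection to the active mgr is reset, the old session torn down, a new one started against the mgr's address vector, and the mgr answers with stats_period=5 (presumably: report statistics every 5 seconds). The address vector packs protocol, address, port and a nonce per entry; pulling it apart:

```python
import re

# The mgr address vector from the reconnect line above: proto:host:port/nonce.
line = ("mgrc reconnect Starting new session with "
        "[v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]")

for proto, host, port, nonce in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", line):
    print(f"{proto}: {host}:{port} (nonce {nonce})")
```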
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 2326528 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:25.328705+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:26.328839+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157d55/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157d55/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:27.328989+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:28.329118+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.960803032s of 10.004592896s, submitted: 14
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087841 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 2310144 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:29.329229+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157cb6/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:30.329464+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:31.329808+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:32.329974+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 2293760 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:33.330175+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089913 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 2285568 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:34.330402+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1157db0/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:35.330591+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:36.330748+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:37.330930+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:38.331106+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c19/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087327 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:39.331241+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c19/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:40.331347+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.876619339s of 11.963118553s, submitted: 28
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157c4c/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:41.331471+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:42.331635+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:43.331758+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090125 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:44.331872+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:45.332009+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:46.332137+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157ce1/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:47.332278+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:48.332437+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43b000/0x0/0x4ffc00000, data 0x1157d0c/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089131 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:49.332609+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c46/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:50.332770+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:51.332893+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157b7f/0x1230000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:52.333147+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.725197792s of 12.809599876s, submitted: 25
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:53.333358+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089729 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:54.333546+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:55.333762+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x1157c1a/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:56.333915+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:57.334049+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:58.334166+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089729 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:59.334319+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x1157c1a/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:00.334450+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 2236416 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:01.334581+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 2236416 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:02.334727+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 2228224 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.085176468s of 10.108474731s, submitted: 6
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:03.334842+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092185 data_alloc: 218103808 data_used: 311296
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:04.334975+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x115969d/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:05.335174+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:06.335309+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x115969d/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:07.335434+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:08.335574+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090523 data_alloc: 218103808 data_used: 311296
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:09.335735+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:10.335904+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 2179072 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:11.336062+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
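Each handle_osd_map line names the epoch range being delivered, the epoch the OSD currently has, and the range the sender holds; the OSD then applies every missing epoch in order, which is why the "i have" value climbs from 148 to 154 across this section. A toy model of which epochs each delivery obliges the OSD to apply, under that reading:

```python
def epochs_to_apply(have: int, first: int, last: int) -> list[int]:
    """Epochs still needed, given an offer of [first, last] and current 'have'."""
    return list(range(max(have + 1, first), last + 1))

# The three handle_osd_map lines seen so far, in order:
print(epochs_to_apply(148, 148, 149))  # [149]
print(epochs_to_apply(149, 150, 150))  # [150]
print(epochs_to_apply(150, 151, 151))  # [151]
```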
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88915968 unmapped: 2260992 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:12.336231+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 15
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa436000/0x0/0x4ffc00000, data 0x115cc1b/0x1236000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:13.336368+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.059730530s of 10.469105721s, submitted: 159
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100111 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:14.336501+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:15.336633+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:16.336864+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:17.337032+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa435000/0x0/0x4ffc00000, data 0x115ccb6/0x1237000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:18.337234+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100111 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:19.337354+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:20.337529+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fa433000/0x0/0x4ffc00000, data 0x115e719/0x123a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:21.337690+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:22.337894+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:23.338051+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fa432000/0x0/0x4ffc00000, data 0x115e7b4/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104005 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:24.338179+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.492309570s of 11.524030685s, submitted: 14
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:25.338315+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:26.338449+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:27.338583+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:28.338715+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x115e8c4/0x123d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111955 data_alloc: 218103808 data_used: 327680
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:29.338836+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:30.338994+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 2088960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:31.339148+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x11605e0/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89096192 unmapped: 2080768 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:32.339309+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:33.339429+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117181 data_alloc: 218103808 data_used: 335872
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:34.339623+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x1161ec8/0x1243000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:35.339902+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89120768 unmapped: 2056192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.267296791s of 10.416739464s, submitted: 51
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:36.340214+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89120768 unmapped: 2056192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:37.340405+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 2048000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:38.340546+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x1161e2d/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120383 data_alloc: 218103808 data_used: 344064
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:39.340685+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:40.340824+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:41.341024+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 2031616 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 155 ms_handle_reset con 0x557763f08000 session 0x55776350d0e0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:42.341218+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 598016 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0x1165511/0x1249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 16
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:43.341391+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124387 data_alloc: 218103808 data_used: 344064
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:44.341538+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:45.341756+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:46.341909+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.574859619s of 10.815853119s, submitted: 264
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:47.342077+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0x1167127/0x124c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:48.342207+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 157 handle_osd_map epochs [158,159], i have 157, src has [1,159]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137003 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:49.342333+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:50.342459+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:51.342641+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:52.342840+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c441/0x1256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:53.342966+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:54.343079+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139835 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:55.343215+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:56.343392+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.203613281s of 10.384685516s, submitted: 64
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c441/0x1256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:57.343509+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:58.343633+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:59.343776+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141625 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c4dc/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:00.344002+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 17
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:01.344228+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91611136 unmapped: 614400 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:02.344508+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91611136 unmapped: 614400 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557764264c00
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:03.344652+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:04.344801+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153121 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91652096 unmapped: 1622016 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:05.344942+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x116fc59/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:06.345119+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:07.345294+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.928189278s of 10.833756447s, submitted: 92
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:08.345456+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:09.345544+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146159 data_alloc: 218103808 data_used: 364544
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa416000/0x0/0x4ffc00000, data 0x116fa7c/0x1258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:10.345654+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:11.345801+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:12.345967+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:13.346089+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:14.346230+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:15.346356+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:16.346539+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:17.346660+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:18.346788+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:19.347167+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:20.347320+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:21.347433+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:22.347599+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:23.347720+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:24.347882+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:25.348033+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:26.348135+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:27.348299+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:28.348405+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:29.348529+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:30.348665+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:31.348844+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:32.349043+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:33.349170+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:34.349364+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:35.349503+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:36.349664+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:37.349864+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:38.350010+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:39.350187+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:40.350331+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:41.350463+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:42.350612+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:43.350748+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:44.350893+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:45.351017+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:46.351156+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:47.351307+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:48.351441+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:49.351523+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:50.351665+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:51.351757+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:52.352412+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:53.352546+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:54.352791+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:55.352949+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:56.353505+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 1572864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:57.353693+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:58.353874+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:59.353985+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:00.354125+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:01.354459+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:02.354603+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:03.354720+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:04.354847+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:05.354976+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:06.355090+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 05:45:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882396314' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:07.355245+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:08.355338+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:09.355449+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:10.355587+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:11.355741+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.228721619s of 64.248054504s, submitted: 16
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 ms_handle_reset con 0x557764264c00 session 0x5577635ba1e0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:12.355927+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91930624 unmapped: 1343488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 18
Nov 29 05:45:30 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:13.356073+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:14.356206+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149453 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:15.356347+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:16.356661+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:17.356819+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:18.356936+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:19.357066+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149453 data_alloc: 218103808 data_used: 372736
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:20.357198+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:21.357363+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:22.357533+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x11715ba/0x125c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:23.357769+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.423639297s of 11.447608948s, submitted: 183
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:24.357906+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155219 data_alloc: 218103808 data_used: 380928
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:25.358063+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40e000/0x0/0x4ffc00000, data 0x11731a0/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:26.358313+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:27.358444+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:28.358576+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:29.358733+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153649 data_alloc: 218103808 data_used: 380928
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:30.358922+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:31.359093+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:32.359358+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:33.359588+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:34.359739+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153649 data_alloc: 218103808 data_used: 380928
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:35.359872+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.546059608s of 12.619788170s, submitted: 25
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:36.359996+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x11731a0/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:37.360215+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:38.360336+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:39.360450+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158853 data_alloc: 218103808 data_used: 389120
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:40.360553+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:41.360721+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:42.360898+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa40d000/0x0/0x4ffc00000, data 0x1174b68/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:43.361123+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:44.361328+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159961 data_alloc: 218103808 data_used: 397312
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:45.361458+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:46.361606+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:47.361884+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:48.362023+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:49.362148+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159961 data_alloc: 218103808 data_used: 397312
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:50.362334+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.282610893s of 14.913866997s, submitted: 51
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:51.362490+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:52.362687+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:53.362877+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:54.363085+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:55.363228+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:56.363394+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:57.363562+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:58.363686+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:59.363850+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:00.363992+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:01.364164+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:02.364437+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:03.364583+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:04.364719+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:05.364844+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:06.365017+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:07.365140+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:08.365276+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:09.365389+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:10.365508+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:11.365708+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:12.365908+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.777770996s of 21.886068344s, submitted: 15
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:13.366056+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:14.366171+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:15.366337+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:16.366518+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:17.366722+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:18.366956+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:19.367122+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:20.367361+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:21.367612+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:22.367917+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:23.368089+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:24.368236+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:25.368355+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:26.368470+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:27.368589+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:28.368706+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:29.368843+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.618280411s of 16.621215820s, submitted: 1
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163983 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:30.369023+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:31.369174+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:32.369345+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:33.369627+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:34.369775+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163983 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:35.369939+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:36.370106+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:37.370225+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:38.370401+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:39.370591+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:40.370794+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:41.370970+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:42.371160+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:43.371320+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:44.371517+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:45.371690+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:46.372013+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:47.374542+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:48.374712+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:49.374860+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:50.375017+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:51.375183+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:52.375365+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:53.375551+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92020736 unmapped: 1253376 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:54.375682+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92020736 unmapped: 1253376 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:55.375826+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 167 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.889047623s of 25.900033951s, submitted: 3
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:56.376008+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:57.376143+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:58.376301+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:59.376456+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166389 data_alloc: 218103808 data_used: 409600
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:00.376620+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:01.376781+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:02.376961+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:03.377080+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:04.377239+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169363 data_alloc: 218103808 data_used: 409600
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:05.377377+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:06.377531+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:07.377685+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:08.377907+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:09.378077+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169363 data_alloc: 218103808 data_used: 409600
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 1228800 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:10.378323+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:11.378628+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:12.378811+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:13.378983+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:14.379110+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:15.379359+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:16.379501+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:17.379627+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:18.379743+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:19.379855+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:20.380152+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:21.380342+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:22.380567+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:23.380757+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:24.380916+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:25.381065+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:26.381421+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:27.381601+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:28.381737+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:29.381862+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:30.381988+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:31.382119+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:32.382310+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:33.382439+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:34.382569+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:35.382731+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:36.382855+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:37.382982+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:38.383136+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:39.383292+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:40.383456+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:41.383651+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:42.383840+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:43.384008+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:44.384139+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:45.384319+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:46.384459+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:47.384594+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:48.384772+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:49.384911+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:50.385104+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:51.385259+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:52.385465+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:53.385625+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:54.385761+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:30 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:30 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:55.385933+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:56.386066+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92053504 unmapped: 1220608 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:57.386224+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}'
Nov 29 05:45:30 compute-0 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 05:45:30 compute-0 ceph-osd[91343]: do_command 'config show' '{prefix=config show}'
Nov 29 05:45:30 compute-0 ceph-osd[91343]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 05:45:30 compute-0 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 05:45:30 compute-0 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 05:45:30 compute-0 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 05:45:30 compute-0 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
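These do_command entries are admin-socket requests being answered by the OSD; the prefixes (config diff, config show, counter dump, counter schema) match what a diagnostics collector gathers, and the matching mon/mgr query burst follows just below. The usual way to issue the same requests by hand is through ceph daemon (a sketch, assuming admin access on this host; the command names are taken verbatim from the log):

    ceph daemon osd.2 config diff
    ceph daemon osd.2 counter schema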
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 2285568 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:58.392096+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 2351104 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:45:30 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:59.392358+0000)
Nov 29 05:45:30 compute-0 ceph-osd[91343]: do_command 'log dump' '{prefix=log dump}'
Nov 29 05:45:30 compute-0 ceph-mon[75176]: from='client.14557 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:30 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3904735651' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 05:45:30 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/296138431' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 05:45:30 compute-0 ceph-mon[75176]: from='client.14563 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:30 compute-0 ceph-mon[75176]: from='client.14567 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:30 compute-0 ceph-mon[75176]: pgmap v1264: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
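The pgmap totals line up with the OSD heartbeats: three OSDs (the heartbeat peer lists [0,1] and [0,2] imply osd.0 through osd.2) at just under 20 GiB each give the 60 GiB the monitor reports:

    echo $((3 * 0x4ffc00000 / 1024 / 1024 / 1024))   # 59, i.e. ~60 GiB across the cluster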
Nov 29 05:45:30 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1882396314' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 05:45:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14569 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 05:45:30 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2146028127' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 05:45:30 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:30 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14573 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 05:45:31 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1542233319' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 05:45:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14577 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:31 compute-0 ceph-mon[75176]: from='client.14569 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:31 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2146028127' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 05:45:31 compute-0 ceph-mon[75176]: from='client.14573 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:31 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1542233319' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 05:45:31 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 05:45:31 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147464233' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 05:45:31 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14581 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 05:45:32 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797536980' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14585 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 05:45:32 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836742134' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14589 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mon[75176]: from='client.14577 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/147464233' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mon[75176]: from='client.14581 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1797536980' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mon[75176]: from='client.14585 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mon[75176]: pgmap v1265: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:32 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3836742134' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 05:45:32 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14593 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 05:45:33 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031763189' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 05:45:33 compute-0 ceph-mon[75176]: from='client.14589 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:33 compute-0 ceph-mon[75176]: from='client.14593 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:33 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1031763189' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
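Each admin command in this burst appears twice: once when the mgr or mon audit channel logs the dispatch (the log_channel(audit) lines) and again when the monitor records the forwarded entry (the bare from='client...' dispatch lines), which is why client.14589's balancer eval shows up at both 05:45:32 and 05:45:33. The audit trail itself can be replayed the same way the collector does above (the log last syntax is taken from the JSON in the dispatched commands):

    ceph log last 100 debug audit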
Nov 29 05:45:33 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14599 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:33 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:45:33.781+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 05:45:33 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
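The healthcheck history ls failure is self-describing: the command is served by the prometheus mgr module, which is not loaded, and the error text names the fix. If that history is wanted, enable the module and retry (the enable command is quoted verbatim from the error message):

    ceph mgr module enable prometheus
    ceph mgr module ls    # confirm prometheus now shows as enabled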
Nov 29 05:45:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 05:45:34 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128209929' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 05:45:34 compute-0 crontab[277195]: (root) LIST (root)
Nov 29 05:45:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 05:45:34 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100858855' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 05:45:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 05:45:34 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1082708319' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 05:45:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 05:45:34 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606643643' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:11.687083+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:13:41.424252+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.9 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:13:41.438254+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.9 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76603392 unmapped: 933888 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 117) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:13:41.424252+0000 osd.1 (osd.1) 116 : cluster [DBG] 5.9 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:13:41.438254+0000 osd.1 (osd.1) 117 : cluster [DBG] 5.9 scrub ok
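The eight log_client lines above are one complete round trip of the OSD's cluster-log protocol: the queue holds entries 116-117 (last_log 117, sent 115), both are sent to mon.compute-0 over v2:192.168.122.100:3300, and handle_log_ack(last 117) confirms the monitor stored them. Note that the embedded event times (05:13:41) trail the syslog receive time (05:45:34) by roughly half an hour, so the embedded timestamps, not the journal prefix, date the scrub events. To pull those cluster-channel entries out of the journal (a sketch, assuming the daemons log to systemd units matching ceph*):

    journalctl -u 'ceph*' --no-pager | grep 'cluster \[DBG\]'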
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:12.687338+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 813889 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 925696 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:13.687542+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:14.687754+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:15.688003+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 917504 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:16.688201+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:13:46.428946+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.d scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:13:46.442354+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.d scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 119) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:13:46.428946+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.d scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:13:46.442354+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.d scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:17.688413+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815036 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:18.688689+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 909312 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:19.688886+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:20.689043+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:21.689343+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 901120 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:22.689548+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815036 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76644352 unmapped: 892928 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:23.689727+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76652544 unmapped: 884736 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:24.689918+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:25.690116+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76660736 unmapped: 876544 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:26.690416+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:27.690632+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815036 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76668928 unmapped: 868352 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.166151047s of 17.183015823s, submitted: 4
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:28.690783+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:13:58.607101+0000 osd.1 (osd.1) 120 : cluster [DBG] 10.6 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:13:58.621150+0000 osd.1 (osd.1) 121 : cluster [DBG] 10.6 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 860160 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 121) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:13:58.607101+0000 osd.1 (osd.1) 120 : cluster [DBG] 10.6 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:13:58.621150+0000 osd.1 (osd.1) 121 : cluster [DBG] 10.6 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:29.691035+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:13:59.601604+0000 osd.1 (osd.1) 122 : cluster [DBG] 10.11 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:13:59.615695+0000 osd.1 (osd.1) 123 : cluster [DBG] 10.11 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 123) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:13:59.601604+0000 osd.1 (osd.1) 122 : cluster [DBG] 10.11 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:13:59.615695+0000 osd.1 (osd.1) 123 : cluster [DBG] 10.11 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:30.691251+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 851968 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:31.691523+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:32.691746+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817333 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:33.691951+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:03.573112+0000 osd.1 (osd.1) 124 : cluster [DBG] 10.10 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:03.587208+0000 osd.1 (osd.1) 125 : cluster [DBG] 10.10 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 843776 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 125) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:03.573112+0000 osd.1 (osd.1) 124 : cluster [DBG] 10.10 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:03.587208+0000 osd.1 (osd.1) 125 : cluster [DBG] 10.10 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:34.692133+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 835584 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:35.692306+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:05.599759+0000 osd.1 (osd.1) 126 : cluster [DBG] 10.12 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:05.613843+0000 osd.1 (osd.1) 127 : cluster [DBG] 10.12 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 835584 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 127) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:05.599759+0000 osd.1 (osd.1) 126 : cluster [DBG] 10.12 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:05.613843+0000 osd.1 (osd.1) 127 : cluster [DBG] 10.12 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:36.692538+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 827392 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:37.692809+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 819631 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76709888 unmapped: 827392 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:38.692945+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 811008 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:39.693085+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 811008 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.017621994s of 12.047937393s, submitted: 8
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:40.693241+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:10.655073+0000 osd.1 (osd.1) 128 : cluster [DBG] 5.1d scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:10.669191+0000 osd.1 (osd.1) 129 : cluster [DBG] 5.1d scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76726272 unmapped: 811008 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 129) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:10.655073+0000 osd.1 (osd.1) 128 : cluster [DBG] 5.1d scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:10.669191+0000 osd.1 (osd.1) 129 : cluster [DBG] 5.1d scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:41.693524+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 802816 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:42.693683+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820779 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76734464 unmapped: 802816 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:43.693845+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 794624 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:44.693987+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:13.720579+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.c scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:13.734821+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.c scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 131) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:13.720579+0000 osd.1 (osd.1) 130 : cluster [DBG] 5.c scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:13.734821+0000 osd.1 (osd.1) 131 : cluster [DBG] 5.c scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 794624 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:45.694189+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:14.720518+0000 osd.1 (osd.1) 132 : cluster [DBG] 2.7 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:14.734763+0000 osd.1 (osd.1) 133 : cluster [DBG] 2.7 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 133) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:14.720518+0000 osd.1 (osd.1) 132 : cluster [DBG] 2.7 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:14.734763+0000 osd.1 (osd.1) 133 : cluster [DBG] 2.7 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 794624 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:46.694425+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 786432 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:47.694694+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 823073 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 786432 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:48.694829+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:18.617372+0000 osd.1 (osd.1) 134 : cluster [DBG] 4.12 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:18.631413+0000 osd.1 (osd.1) 135 : cluster [DBG] 4.12 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 135) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:18.617372+0000 osd.1 (osd.1) 134 : cluster [DBG] 4.12 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:18.631413+0000 osd.1 (osd.1) 135 : cluster [DBG] 4.12 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:49.695000+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:50.695133+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:20.580084+0000 osd.1 (osd.1) 136 : cluster [DBG] 10.f scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:20.594185+0000 osd.1 (osd.1) 137 : cluster [DBG] 10.f scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 137) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:20.580084+0000 osd.1 (osd.1) 136 : cluster [DBG] 10.f scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:20.594185+0000 osd.1 (osd.1) 137 : cluster [DBG] 10.f scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:51.695771+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:52.696153+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 825369 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:53.696374+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.896008492s of 13.930132866s, submitted: 10
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:54.696865+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:24.585151+0000 osd.1 (osd.1) 138 : cluster [DBG] 4.14 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:24.599304+0000 osd.1 (osd.1) 139 : cluster [DBG] 4.14 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 139) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:24.585151+0000 osd.1 (osd.1) 138 : cluster [DBG] 4.14 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:24.599304+0000 osd.1 (osd.1) 139 : cluster [DBG] 4.14 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:55.697370+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:56.697698+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:57.697898+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 826517 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 737280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:58.698069+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:59.698294+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 770048 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:00.698420+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:01.698562+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 1 last_log 140 sent 139 num 1 unsent 1 sending 1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:31.685360+0000 osd.1 (osd.1) 140 : cluster [DBG] 4.9 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 140) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:31.685360+0000 osd.1 (osd.1) 140 : cluster [DBG] 4.9 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 761856 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:02.698916+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 1 last_log 141 sent 140 num 1 unsent 1 sending 1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:31.699454+0000 osd.1 (osd.1) 141 : cluster [DBG] 4.9 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 141) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:31.699454+0000 osd.1 (osd.1) 141 : cluster [DBG] 4.9 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827664 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76783616 unmapped: 753664 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:03.699059+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:33.681610+0000 osd.1 (osd.1) 142 : cluster [DBG] 5.f scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:33.695728+0000 osd.1 (osd.1) 143 : cluster [DBG] 5.f scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 143) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:33.681610+0000 osd.1 (osd.1) 142 : cluster [DBG] 5.f scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:33.695728+0000 osd.1 (osd.1) 143 : cluster [DBG] 5.f scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76791808 unmapped: 745472 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:04.699217+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 737280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:05.699440+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 737280 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:06.699613+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 729088 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:07.699744+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828811 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 720896 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:08.699885+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76816384 unmapped: 720896 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:09.700097+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 712704 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:10.700252+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 712704 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:11.700548+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76824576 unmapped: 712704 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:12.700887+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828811 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 704512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:13.701363+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 704512 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:14.701504+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.928232193s of 20.949409485s, submitted: 6
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 696320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:15.701619+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:45.534546+0000 osd.1 (osd.1) 144 : cluster [DBG] 10.b scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:45.548652+0000 osd.1 (osd.1) 145 : cluster [DBG] 10.b scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 696320 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 145) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:45.534546+0000 osd.1 (osd.1) 144 : cluster [DBG] 10.b scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:45.548652+0000 osd.1 (osd.1) 145 : cluster [DBG] 10.b scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:16.701912+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:46.512780+0000 osd.1 (osd.1) 146 : cluster [DBG] 4.10 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:46.526906+0000 osd.1 (osd.1) 147 : cluster [DBG] 4.10 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 688128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 147) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:46.512780+0000 osd.1 (osd.1) 146 : cluster [DBG] 4.10 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:46.526906+0000 osd.1 (osd.1) 147 : cluster [DBG] 4.10 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:17.702197+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832254 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76849152 unmapped: 688128 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:18.702433+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:48.537283+0000 osd.1 (osd.1) 148 : cluster [DBG] 2.5 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:48.551442+0000 osd.1 (osd.1) 149 : cluster [DBG] 2.5 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 663552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 149) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:48.537283+0000 osd.1 (osd.1) 148 : cluster [DBG] 2.5 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:48.551442+0000 osd.1 (osd.1) 149 : cluster [DBG] 2.5 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:19.702720+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 663552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:20.702915+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76873728 unmapped: 663552 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:21.703101+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 655360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:22.703337+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 832254 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76881920 unmapped: 655360 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:23.703543+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.d deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.d deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 647168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:24.703756+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:54.605190+0000 osd.1 (osd.1) 150 : cluster [DBG] 4.d deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:54.619360+0000 osd.1 (osd.1) 151 : cluster [DBG] 4.d deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 151) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:54.605190+0000 osd.1 (osd.1) 150 : cluster [DBG] 4.d deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:54.619360+0000 osd.1 (osd.1) 151 : cluster [DBG] 4.d deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.023162842s of 10.054508209s, submitted: 8
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 647168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:25.704120+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:55.589231+0000 osd.1 (osd.1) 152 : cluster [DBG] 6.1 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:55.603310+0000 osd.1 (osd.1) 153 : cluster [DBG] 6.1 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 153) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:55.589231+0000 osd.1 (osd.1) 152 : cluster [DBG] 6.1 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:55.603310+0000 osd.1 (osd.1) 153 : cluster [DBG] 6.1 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76890112 unmapped: 647168 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:26.704417+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:56.608462+0000 osd.1 (osd.1) 154 : cluster [DBG] 2.4 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:56.622567+0000 osd.1 (osd.1) 155 : cluster [DBG] 2.4 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 155) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:56.608462+0000 osd.1 (osd.1) 154 : cluster [DBG] 2.4 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:56.622567+0000 osd.1 (osd.1) 155 : cluster [DBG] 2.4 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:27.704661+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:57.653300+0000 osd.1 (osd.1) 156 : cluster [DBG] 2.6 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:57.667368+0000 osd.1 (osd.1) 157 : cluster [DBG] 2.6 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 157) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:57.653300+0000 osd.1 (osd.1) 156 : cluster [DBG] 2.6 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:57.667368+0000 osd.1 (osd.1) 157 : cluster [DBG] 2.6 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836842 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:28.704822+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:58.652134+0000 osd.1 (osd.1) 158 : cluster [DBG] 4.f scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:14:58.666353+0000 osd.1 (osd.1) 159 : cluster [DBG] 4.f scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 159) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:58.652134+0000 osd.1 (osd.1) 158 : cluster [DBG] 4.f scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:14:58.666353+0000 osd.1 (osd.1) 159 : cluster [DBG] 4.f scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76906496 unmapped: 630784 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:29.705009+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 622592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:30.705204+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 622592 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:31.705314+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:01.548255+0000 osd.1 (osd.1) 160 : cluster [DBG] 10.2 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:01.562317+0000 osd.1 (osd.1) 161 : cluster [DBG] 10.2 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 161) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:01.548255+0000 osd.1 (osd.1) 160 : cluster [DBG] 10.2 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:01.562317+0000 osd.1 (osd.1) 161 : cluster [DBG] 10.2 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76922880 unmapped: 614400 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:32.705516+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:02.502368+0000 osd.1 (osd.1) 162 : cluster [DBG] 10.14 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:02.519900+0000 osd.1 (osd.1) 163 : cluster [DBG] 10.14 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 163) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:02.502368+0000 osd.1 (osd.1) 162 : cluster [DBG] 10.14 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:02.519900+0000 osd.1 (osd.1) 163 : cluster [DBG] 10.14 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841433 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76931072 unmapped: 606208 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:33.705764+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:03.508000+0000 osd.1 (osd.1) 164 : cluster [DBG] 2.9 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:03.522079+0000 osd.1 (osd.1) 165 : cluster [DBG] 2.9 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 165) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:03.508000+0000 osd.1 (osd.1) 164 : cluster [DBG] 2.9 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:03.522079+0000 osd.1 (osd.1) 165 : cluster [DBG] 2.9 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 598016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:34.706022+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 598016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:35.706198+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.778158188s of 10.833313942s, submitted: 14
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76939264 unmapped: 598016 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:36.706464+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:06.422585+0000 osd.1 (osd.1) 166 : cluster [DBG] 2.1b scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:06.436628+0000 osd.1 (osd.1) 167 : cluster [DBG] 2.1b scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 167) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:06.422585+0000 osd.1 (osd.1) 166 : cluster [DBG] 2.1b scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:06.436628+0000 osd.1 (osd.1) 167 : cluster [DBG] 2.1b scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:37.706715+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:07.407944+0000 osd.1 (osd.1) 168 : cluster [DBG] 5.18 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:07.422144+0000 osd.1 (osd.1) 169 : cluster [DBG] 5.18 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 581632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 169) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:07.407944+0000 osd.1 (osd.1) 168 : cluster [DBG] 5.18 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:07.422144+0000 osd.1 (osd.1) 169 : cluster [DBG] 5.18 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844878 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:38.706965+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:08.425209+0000 osd.1 (osd.1) 170 : cluster [DBG] 10.13 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:08.438843+0000 osd.1 (osd.1) 171 : cluster [DBG] 10.13 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76955648 unmapped: 581632 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 171) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:08.425209+0000 osd.1 (osd.1) 170 : cluster [DBG] 10.13 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:08.438843+0000 osd.1 (osd.1) 171 : cluster [DBG] 10.13 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:39.707204+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 573440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:40.707381+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76963840 unmapped: 573440 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:41.707514+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:11.344105+0000 osd.1 (osd.1) 172 : cluster [DBG] 2.a scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:11.358335+0000 osd.1 (osd.1) 173 : cluster [DBG] 2.a scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 565248 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 173) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:11.344105+0000 osd.1 (osd.1) 172 : cluster [DBG] 2.a scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:11.358335+0000 osd.1 (osd.1) 173 : cluster [DBG] 2.a scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:42.707715+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 565248 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847173 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:43.707886+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:13.336282+0000 osd.1 (osd.1) 174 : cluster [DBG] 5.19 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:13.350281+0000 osd.1 (osd.1) 175 : cluster [DBG] 5.19 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76972032 unmapped: 565248 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 175) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:13.336282+0000 osd.1 (osd.1) 174 : cluster [DBG] 5.19 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:13.350281+0000 osd.1 (osd.1) 175 : cluster [DBG] 5.19 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:44.708115+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 557056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:45.708255+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76980224 unmapped: 557056 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:46.708451+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 548864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.948025703s of 10.981702805s, submitted: 10
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:47.708644+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:17.404335+0000 osd.1 (osd.1) 176 : cluster [DBG] 5.1a scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:17.418507+0000 osd.1 (osd.1) 177 : cluster [DBG] 5.1a scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 76988416 unmapped: 548864 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 177) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:17.404335+0000 osd.1 (osd.1) 176 : cluster [DBG] 5.1a scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:17.418507+0000 osd.1 (osd.1) 177 : cluster [DBG] 5.1a scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849468 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:48.708850+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:18.355565+0000 osd.1 (osd.1) 178 : cluster [DBG] 6.6 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:18.373172+0000 osd.1 (osd.1) 179 : cluster [DBG] 6.6 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 532480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 179) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:18.355565+0000 osd.1 (osd.1) 178 : cluster [DBG] 6.6 scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:18.373172+0000 osd.1 (osd.1) 179 : cluster [DBG] 6.6 scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:49.709066+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:19.336672+0000 osd.1 (osd.1) 180 : cluster [DBG] 6.e scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:19.354178+0000 osd.1 (osd.1) 181 : cluster [DBG] 6.e scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 532480 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 181) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:19.336672+0000 osd.1 (osd.1) 180 : cluster [DBG] 6.e scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:19.354178+0000 osd.1 (osd.1) 181 : cluster [DBG] 6.e scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:50.709307+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 524288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:51.709500+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 524288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:52.709632+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77012992 unmapped: 524288 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850615 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:53.709785+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:54.709960+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:55.710140+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 516096 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:56.710449+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77029376 unmapped: 507904 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:57.710768+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 499712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:58.710929+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 850615 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 499712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.895287514s of 11.917461395s, submitted: 6
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:59.711055+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:29.321751+0000 osd.1 (osd.1) 182 : cluster [DBG] 6.2 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:29.335865+0000 osd.1 (osd.1) 183 : cluster [DBG] 6.2 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77037568 unmapped: 499712 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 183) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:29.321751+0000 osd.1 (osd.1) 182 : cluster [DBG] 6.2 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:29.335865+0000 osd.1 (osd.1) 183 : cluster [DBG] 6.2 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:00.711350+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 491520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:01.711585+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77045760 unmapped: 491520 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:02.711795+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:32.349046+0000 osd.1 (osd.1) 184 : cluster [DBG] 6.c deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:32.366700+0000 osd.1 (osd.1) 185 : cluster [DBG] 6.c deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 185) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:32.349046+0000 osd.1 (osd.1) 184 : cluster [DBG] 6.c deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:32.366700+0000 osd.1 (osd.1) 185 : cluster [DBG] 6.c deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:03.712077+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852909 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:04.712299+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 483328 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:05.712521+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:35.332938+0000 osd.1 (osd.1) 186 : cluster [DBG] 6.4 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:35.361179+0000 osd.1 (osd.1) 187 : cluster [DBG] 6.4 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 187) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:35.332938+0000 osd.1 (osd.1) 186 : cluster [DBG] 6.4 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:35.361179+0000 osd.1 (osd.1) 187 : cluster [DBG] 6.4 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:06.712738+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:36.382641+0000 osd.1 (osd.1) 188 : cluster [DBG] 6.b scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:36.400318+0000 osd.1 (osd.1) 189 : cluster [DBG] 6.b scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77070336 unmapped: 466944 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 189) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:36.382641+0000 osd.1 (osd.1) 188 : cluster [DBG] 6.b scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:36.400318+0000 osd.1 (osd.1) 189 : cluster [DBG] 6.b scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:07.712989+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:37.414575+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.d scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:37.435498+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.d scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 191) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:37.414575+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.d scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:37.435498+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.d scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:08.713345+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856350 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 450560 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:09.713503+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:10.713647+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77103104 unmapped: 434176 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.140506744s of 12.177715302s, submitted: 10
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:11.713879+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:41.499550+0000 osd.1 (osd.1) 192 : cluster [DBG] 9.15 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:41.531329+0000 osd.1 (osd.1) 193 : cluster [DBG] 9.15 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77111296 unmapped: 425984 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 193) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:41.499550+0000 osd.1 (osd.1) 192 : cluster [DBG] 9.15 deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:41.531329+0000 osd.1 (osd.1) 193 : cluster [DBG] 9.15 deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:12.714062+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:13.714373+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857498 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77119488 unmapped: 417792 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:14.714523+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:44.483808+0000 osd.1 (osd.1) 194 : cluster [DBG] 9.1f deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  will send 2025-11-29T05:15:44.519119+0000 osd.1 (osd.1) 195 : cluster [DBG] 9.1f deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client handle_log_ack log(last 195) v1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:44.483808+0000 osd.1 (osd.1) 194 : cluster [DBG] 9.1f deep-scrub starts
Nov 29 05:45:34 compute-0 ceph-osd[90181]: log_client  logged 2025-11-29T05:15:44.519119+0000 osd.1 (osd.1) 195 : cluster [DBG] 9.1f deep-scrub ok
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:15.714710+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 385024 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:16.714930+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:17.715060+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:18.715181+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 376832 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:19.715450+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:20.715657+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 368640 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:21.715784+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:22.715925+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 360448 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:23.716094+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:24.716228+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:25.716415+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77185024 unmapped: 352256 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:26.716586+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:27.716806+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 344064 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:28.716945+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:29.717091+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 335872 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:30.717233+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:31.717385+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 327680 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:32.717565+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:33.717724+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:34.718073+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77217792 unmapped: 319488 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:35.718198+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:36.718386+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 311296 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:37.718530+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:38.718710+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:39.718872+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 303104 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:40.719051+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:41.719162+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 294912 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:42.719326+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:43.719520+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:44.719723+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 286720 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:45.719949+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 278528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:46.720187+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 278528 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:47.720339+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:48.720452+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 270336 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:49.720612+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 262144 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:50.720791+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 253952 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:51.720922+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:52.721074+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:53.721238+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 245760 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:54.751411+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:55.751681+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 237568 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:56.751876+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:57.752093+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:58.752241+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 229376 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:59.752446+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:00.752592+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 221184 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:01.752868+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:02.753054+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:03.753328+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 212992 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:04.753640+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:05.753842+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 204800 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:06.753994+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:07.754140+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 196608 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:08.754356+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:09.754549+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 188416 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:10.754707+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 180224 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:11.754968+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:12.755342+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 172032 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:13.755542+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:14.755702+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77373440 unmapped: 163840 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:15.755882+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:16.756087+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:17.756294+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 155648 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:18.756407+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:19.756598+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 147456 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:20.756804+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:21.757035+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:22.757324+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 139264 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:23.757474+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:24.757604+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 131072 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:25.757773+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:26.757926+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 122880 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:27.758112+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 114688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:28.758560+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 114688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:29.758680+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 114688 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:30.758981+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:31.759646+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77430784 unmapped: 106496 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:32.759954+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 98304 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:33.760091+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 98304 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:34.760254+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:35.760409+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:36.760672+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77447168 unmapped: 90112 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:37.760923+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:38.761115+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77455360 unmapped: 81920 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:39.761315+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:40.761479+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:41.761687+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 73728 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:42.761858+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:43.761989+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 65536 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:44.762135+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:45.762699+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 57344 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:46.762909+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:47.763060+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:48.763449+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 49152 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:49.763568+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:50.763772+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 40960 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:51.764093+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 32768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:52.764238+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 32768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:53.764334+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 32768 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:54.764546+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:55.764691+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 24576 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:56.764878+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:57.764999+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 16384 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:58.765155+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 8192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:59.765346+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 8192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:00.765588+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 8192 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:01.765716+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:02.765873+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 0 heap: 77537280 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:03.766005+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:04.766185+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1040384 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:05.766333+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 1032192 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:06.766534+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 1032192 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:07.766686+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1024000 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:08.766866+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1024000 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:09.767088+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 1024000 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:10.767289+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1015808 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:11.767425+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 1015808 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:12.767633+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1007616 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:13.767924+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1007616 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:14.768257+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 1007616 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:15.768613+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:16.769063+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 999424 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:17.769472+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:18.769704+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:19.769952+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 991232 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:20.770072+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:21.770247+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 983040 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:22.770481+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:23.770627+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:24.770781+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 974848 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:25.771005+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:26.771206+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 966656 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:27.771435+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:28.771572+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 958464 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:29.771750+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:30.771920+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 950272 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:31.772048+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:32.772177+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:33.772331+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 942080 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:34.773596+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:35.773789+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 933888 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:36.774015+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:37.774225+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 925696 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:38.774375+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:39.774698+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:40.774832+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 917504 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:41.774990+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:42.775200+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 909312 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:43.775363+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:44.775521+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 901120 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:45.775833+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:46.776094+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:47.776361+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 892928 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:48.776512+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 884736 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:49.776736+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 884736 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:50.776883+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 876544 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:51.777147+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 876544 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:52.777379+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 868352 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:53.777603+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 868352 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:54.777786+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77717504 unmapped: 868352 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:55.777920+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 860160 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:56.778059+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 860160 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:57.778227+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 851968 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:58.778336+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 851968 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:59.778487+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 851968 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:00.778614+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:01.778789+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 843776 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:02.778919+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:03.779038+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:04.779240+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 835584 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:05.779394+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:06.779544+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77758464 unmapped: 827392 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:07.779653+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 819200 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:08.779837+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77766656 unmapped: 819200 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:09.780001+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 811008 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:10.780101+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 811008 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:11.780238+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77774848 unmapped: 811008 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:12.780329+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 802816 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:13.780463+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 802816 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:14.780597+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 794624 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:15.780749+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 794624 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:16.780895+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 786432 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:17.781054+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 786432 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:18.781217+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 786432 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:19.781316+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 778240 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:20.781464+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 778240 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:21.781583+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 770048 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:22.781711+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 770048 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:23.781857+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 761856 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:24.782023+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 761856 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:25.782158+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 761856 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:26.782334+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 753664 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:27.782474+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 753664 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:28.782654+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 745472 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:29.782774+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 745472 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:30.782912+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 745472 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:31.783029+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 737280 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:32.783166+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77848576 unmapped: 737280 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:33.783256+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 729088 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:34.783410+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77856768 unmapped: 729088 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:35.783571+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 720896 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:36.783742+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77864960 unmapped: 720896 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:37.783921+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 712704 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:38.784145+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 712704 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:39.784306+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 712704 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:40.784549+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 704512 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:41.784715+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77881344 unmapped: 704512 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:42.784904+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 696320 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:43.785116+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 696320 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:44.785277+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 696320 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:45.785460+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 688128 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:46.785638+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:47.785769+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 688128 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:48.785888+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 679936 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:49.786050+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77905920 unmapped: 679936 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:50.786203+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 671744 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:51.786324+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 671744 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:52.786550+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77914112 unmapped: 671744 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:53.786701+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 663552 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:54.786954+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 663552 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:55.787106+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 655360 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:56.787319+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77930496 unmapped: 655360 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:57.787585+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 647168 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:58.787749+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 647168 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:59.787906+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77938688 unmapped: 647168 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:00.788026+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 638976 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:01.788314+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 638976 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:02.788491+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 630784 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:03.788707+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 630784 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:04.788873+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 622592 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:05.789192+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 622592 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:06.789347+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77963264 unmapped: 622592 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:07.789546+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 614400 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:08.789714+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 614400 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:09.789871+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 606208 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:10.790011+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 606208 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:11.790174+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 606208 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 6875 writes, 28K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6875 writes, 1210 syncs, 5.68 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6875 writes, 28K keys, 6875 commit groups, 1.0 writes per commit group, ingest: 19.64 MB, 0.03 MB/s
                                           Interval WAL: 6875 writes, 1210 syncs, 5.68 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:12.790347+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 532480 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:13.790490+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 532480 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:14.790621+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 524288 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:15.790740+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 524288 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:16.790908+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 524288 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:17.791048+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 516096 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:18.791189+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 516096 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:19.791390+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 507904 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:20.791525+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 507904 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:21.791644+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 499712 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:22.791811+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 499712 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:23.791933+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 499712 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:24.792104+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 491520 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:25.792324+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 491520 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:26.792469+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 491520 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:27.792686+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 483328 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:28.792845+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 483328 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:29.793014+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 475136 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:30.793127+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 475136 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:31.793255+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 466944 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:32.793394+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 466944 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:33.793511+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 466944 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:34.793648+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:35.793764+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:36.793929+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 450560 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:37.794075+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 450560 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:38.794206+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 450560 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:39.794339+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 442368 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:40.794517+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 442368 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:41.795237+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:42.795623+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:43.795922+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:44.796079+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 425984 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:45.796321+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 425984 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:46.796548+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:47.796703+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:48.796841+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 409600 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:49.797000+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 409600 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:50.797181+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 409600 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:51.797337+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 401408 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:52.797583+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 401408 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:53.797699+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 393216 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:54.797935+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 385024 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:55.798259+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:56.798505+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:57.798857+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:58.799066+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 368640 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:59.799194+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 368640 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:00.799329+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:01.799445+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:02.799569+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:03.799768+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:04.799906+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:05.800198+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:06.800433+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:07.800563+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:08.800797+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:09.800985+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 298.764495850s of 298.778137207s, submitted: 4
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:10.801126+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78028800 unmapped: 557056 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:11.801332+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 548864 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:12.801477+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 548864 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:13.801623+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 548864 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:14.801780+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 548864 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:15.801911+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 548864 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:16.802065+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 548864 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:17.802191+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 548864 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:18.802412+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 540672 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:19.802598+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 540672 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:20.802733+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 540672 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:21.802852+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 532480 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:22.802975+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 532480 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:23.803097+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 524288 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:24.803308+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 524288 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:25.803449+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 524288 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:26.803597+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 516096 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:27.803839+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 516096 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:28.803977+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 507904 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:29.804144+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 507904 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:30.804351+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 499712 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:31.804491+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 499712 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:32.804669+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 499712 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:33.804874+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 491520 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:34.805078+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 491520 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:35.805309+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 483328 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:36.805552+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 483328 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:37.805745+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 483328 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:38.805959+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 475136 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:39.806121+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 475136 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:40.806360+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 466944 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:41.806472+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 466944 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:42.806621+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:43.806814+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:44.806965+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:45.807144+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 450560 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:46.807320+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 450560 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:47.807454+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 442368 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:48.807628+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 442368 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:49.807807+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:50.808007+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:51.808237+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:52.808533+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 425984 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:53.808716+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 425984 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:54.808926+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:55.809141+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:56.809376+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:57.809568+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 409600 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:58.809751+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 409600 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:59.809891+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 401408 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:00.810125+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 401408 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:01.810299+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 393216 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:02.810499+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 393216 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:03.810658+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 393216 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:04.810866+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 385024 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:05.811066+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 385024 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:06.811238+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:07.811331+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:08.811445+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:09.811597+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 368640 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:10.811724+0000)
Nov 29 05:45:34 compute-0 ceph-mon[75176]: from='client.14599 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:34 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/128209929' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 05:45:34 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3100858855' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 05:45:34 compute-0 ceph-mon[75176]: pgmap v1266: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:34 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1082708319' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 05:45:34 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2606643643' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 368640 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:11.811852+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:12.811984+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:13.812115+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:14.812234+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:15.812323+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:16.812482+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:17.812680+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:18.812834+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:19.812994+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:20.813125+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:21.813256+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:22.813408+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:23.813541+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:24.813670+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:25.813855+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:26.814032+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:27.814179+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:28.814333+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:29.814466+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:30.814627+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:31.814748+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:32.814876+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:33.815032+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:34.815155+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:35.815317+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:36.815460+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:37.815590+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:38.815710+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:39.815842+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:40.816050+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:41.816234+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:42.816361+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:43.816477+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:44.816625+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:45.816743+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:46.816925+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:47.817075+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:48.817253+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:49.817414+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:50.817607+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:51.817798+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:52.817955+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:53.818095+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:54.818219+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:55.818446+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:56.818675+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:57.818860+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:58.819015+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:59.819186+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:00.819387+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:01.819540+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:02.819657+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:03.819831+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:04.819954+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:05.820355+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:06.820543+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:07.820684+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:08.820815+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:09.820944+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:10.821067+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:11.821209+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:12.821382+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:13.821526+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:14.821644+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:15.821770+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:16.821923+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:17.822054+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:18.822212+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:19.822410+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:20.822534+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:21.822671+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:22.823784+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:23.823938+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:24.824221+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:25.824390+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:26.824961+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:27.825410+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:28.825729+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:29.825918+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:30.826119+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:31.826315+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:32.826435+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:33.826564+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:34.826741+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:35.826930+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:36.827113+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:37.827253+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:38.827406+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:39.827568+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:40.827707+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:41.827836+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:42.827946+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:43.828059+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:44.828193+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:45.828397+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:46.828564+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:47.828879+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:48.829074+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:49.829244+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:50.829339+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:51.829603+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:52.829767+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:53.830006+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:54.830186+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:55.830362+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:56.830614+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:57.830763+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:58.830939+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:59.831183+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:00.831368+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:01.831547+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:02.831701+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:03.831873+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:04.832007+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:05.832149+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:06.832313+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:07.832447+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:08.832671+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:09.832823+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:10.832983+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:11.833195+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:12.833347+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:13.833476+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:14.833605+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:15.833724+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:16.833947+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:17.834080+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:18.834318+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:19.834534+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:20.834712+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:21.834902+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:22.835028+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:23.835171+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:24.835337+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:25.835505+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 303104 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:26.835670+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:27.835820+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:28.835969+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:29.836100+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:30.836243+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:31.836382+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:32.836656+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:33.836870+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:34.837047+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:35.837199+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:36.837418+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:37.837576+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:38.837736+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:39.837854+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:40.838067+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:41.838233+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:42.838415+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:43.838601+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:44.838794+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:45.838965+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:46.839116+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:47.839251+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:48.839419+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:49.839577+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:50.839704+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:51.839831+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:52.839963+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:53.840117+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:54.840336+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:55.840534+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:56.840711+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:57.840871+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:58.841023+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:59.841181+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:00.841356+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:01.841527+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:02.841680+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:03.841840+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:04.841998+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:05.842215+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:06.842494+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:07.842696+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:08.842821+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:09.842948+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:10.843098+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:11.843287+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:12.843459+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:13.843707+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:14.843875+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:15.844045+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:16.844203+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:17.844363+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:18.844518+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc ms_handle_reset ms_handle_reset con 0x55909679fc00
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: get_auth_request con 0x559097d03c00 auth_method 0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_configure stats_period=5
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 ms_handle_reset con 0x559097d03400 session 0x5590967283c0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x55909a3ba400
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 ms_handle_reset con 0x5590971ab800 session 0x559097306780
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x55909a3b9000
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:19.844730+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 0 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:20.844973+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 0 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:21.845171+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 0 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:22.845368+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:23.845648+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:24.845837+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:25.846143+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:26.846400+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:27.846571+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:28.846759+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:29.846979+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:30.847252+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:31.847552+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:32.847836+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:33.848070+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:34.848328+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:35.848481+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:36.848653+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:37.848781+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:38.848944+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:39.849132+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:40.849280+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:41.849481+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:42.849653+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:43.850016+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:44.850375+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:45.850563+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:46.850733+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:47.850884+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:48.851032+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:49.851215+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:50.851377+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:51.851548+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:52.851714+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:53.851900+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:54.852052+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:55.852172+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:56.852332+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:57.852461+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:58.852597+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:59.852764+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:00.852884+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:01.852999+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:02.853109+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:03.853344+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:04.853491+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:05.853664+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:06.853905+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:07.854024+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:08.854176+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:09.854372+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:10.854508+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:11.854620+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:12.854802+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:13.854929+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:14.855113+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:15.855249+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:16.855451+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:17.855600+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:18.855762+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:19.855913+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:20.856093+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 999424 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:21.856250+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 999424 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:22.856468+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 999424 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:23.856623+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:24.856775+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:25.856942+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:26.857134+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:27.857320+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:28.857448+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:29.857631+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:30.857782+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:31.858054+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:32.858256+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:33.858461+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:34.858577+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
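The heartbeat line packs the OSD's usage counters into hex inside store_statfs (the "120" after osd.1 is most likely the current osdmap epoch). Decoded, the triple 0x4fca46000/0x0/0x4ffc00000 reads as roughly 19.95 GiB / 0 / 20.0 GiB, and data 0x127c87/0x1d8000 as about 1.2 MiB stored in 1.9 MiB of allocated extents, i.e. a nearly empty ~20 GiB OSD. The available/reserved/total and stored/allocated labels below follow my reading of the store_statfs_t printer; confirm the field order against osd_types.cc for this Ceph release before relying on it:

```python
import re

STATFS_RE = re.compile(
    r"store_statfs\(0x(?P<a>[0-9a-f]+)/0x(?P<b>[0-9a-f]+)/0x(?P<c>[0-9a-f]+), "
    r"data 0x(?P<stored>[0-9a-f]+)/0x(?P<alloc>[0-9a-f]+)"
)

GiB = 1 << 30

def decode_statfs(line):
    """Pull the hex byte counters out of an osd_stat heartbeat line."""
    m = STATFS_RE.search(line)
    if not m:
        return None
    avail, reserved, total = (int(m.group(k), 16) for k in "abc")
    return {
        "available_GiB": round(avail / GiB, 2),   # assumed field order
        "reserved_GiB": round(reserved / GiB, 2),
        "total_GiB": round(total / GiB, 2),
        "data_stored_bytes": int(m.group("stored"), 16),
        "data_allocated_bytes": int(m.group("alloc"), 16),
    }
```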
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:35.858757+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
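This trio recurs every five seconds or so: the MempoolThread rebalances the OSD cache and reapplies the RocksDB high-priority pool ratios (0.285714 is 2/7, 0.0555556 is 1/18). The carve-outs logged by _resize_shards can be checked directly against the logged cache_size; the numbers below are copied from the line above, and the comment labels are my glosses on bluestore's usual cache pool naming:

```python
cache_size = 2_845_415_832           # bytes, from the _resize_shards line

alloc = {                            # per-pool carve-outs from the same line
    "kv":       1_207_959_552,       # RocksDB block cache (1152 MiB)
    "kv_onode":   234_881_024,       # onode slice of the kv cache (224 MiB)
    "meta":     1_140_850_688,       # bluestore metadata cache (1088 MiB)
    "data":       218_103_808,       # bluestore data buffers (208 MiB)
}

for name, nbytes in alloc.items():
    print(f"{name:8s} {nbytes / cache_size:6.1%}  ({nbytes / 2**20:,.0f} MiB)")

# The four pools cover ~98.5% of cache_size; the remainder is slack the
# tuner leaves unassigned.
print(f"assigned {sum(alloc.values()) / cache_size:.1%}")
```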
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:36.858943+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:37.859105+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:38.859296+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:39.859488+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:40.859669+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:41.859879+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:42.860139+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:43.860384+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:44.860583+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:45.860746+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:46.860910+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:47.861058+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:48.861244+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:49.861412+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:50.861528+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
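Each tune_memory line is the PriorityCache autotuner comparing the process heap to the 4 GiB memory target (4294967296). With only ~75 MiB mapped, the tuner keeps the aggregate cache budget pinned at 2,845,415,832 bytes (old mem equals new mem on every line), and the only movement across this stretch is "mapped" creeping upward in 8 KiB steps (78643200 -> 78651392 above). A deliberately simplified model of that decision, assuming only that the budget shrinks once mapped memory eats into the target's headroom (Ceph's real controller is more elaborate):

```python
def tune_memory(target, mapped, current_budget, headroom=0.25):
    """Toy stand-in for the prioritycache budget decision.

    Leaves the budget alone while mapped heap stays well below the
    memory target, and scales it down once mapped crosses into the
    reserved headroom. Not Ceph's actual tuner.
    """
    ceiling = int(target * (1 - headroom))
    if mapped < ceiling:
        return current_budget          # plenty of room: no change
    # squeezed: give the caches whatever remains under the target
    return max(0, target - mapped)

# Values from the line above: the budget stays put, as logged.
assert tune_memory(4294967296, 78651392, 2845415832) == 2845415832
```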
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:51.861719+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:52.861853+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:53.862014+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:54.862174+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:55.862324+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:56.862475+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:57.862633+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:58.862784+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:59.862983+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:00.863132+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:01.863259+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:02.863379+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:03.863554+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:04.863696+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:05.863821+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:06.864015+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:07.864171+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:08.864374+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:09.864530+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:10.864670+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:11.864799+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:12.864930+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:13.865122+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:14.865289+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:15.865470+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:16.865647+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:17.865821+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:18.865964+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:19.866182+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:20.866336+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:21.866478+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:22.866600+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:23.866769+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:24.866918+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:25.867066+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:26.867333+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:27.867450+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:28.867594+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:29.867713+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:30.867960+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:31.868143+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:32.868340+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:33.868468+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:34.868607+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:35.868795+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:36.869021+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:37.869170+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:38.869340+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:39.869504+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:40.869679+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:41.869899+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:42.870078+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:43.870243+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:44.870416+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:45.870598+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:46.870798+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:47.871011+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:48.871164+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:49.871350+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:50.871507+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:51.871659+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 rsyslogd[1003]: imjournal from <np0005539482:ceph-osd>: begin to drop messages due to rate-limiting
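Here rsyslog's imjournal input trips its rate limiter on the ceph-osd flood, so journal messages after this point may be missing from this file even though they remain in the journal itself (the limits are tunable via imjournal's ratelimit.interval and ratelimit.burst parameters). The mechanism is a plain interval/burst counter; a sketch of the same idea, using 600 s / 20000 messages, which I believe are imjournal's documented defaults (the class itself is illustrative):

```python
import time

class IntervalRateLimiter:
    """Interval/burst limiting in the style of rsyslog's imjournal:
    admit up to `burst` messages per `interval` seconds, drop the rest,
    and report the drop count when the window rolls over."""

    def __init__(self, interval=600.0, burst=20000):
        self.interval, self.burst = interval, burst
        self.window_start = time.monotonic()
        self.seen = self.dropped = 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.interval:
            if self.dropped:
                print(f"dropped {self.dropped} messages due to rate-limiting")
            self.window_start, self.seen, self.dropped = now, 0, 0
        self.seen += 1
        if self.seen > self.burst:
            self.dropped += 1
            return False
        return True
```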
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:52.871827+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:53.872013+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:54.872160+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:55.872306+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:56.872484+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:57.872672+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:58.872871+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:59.873056+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:00.873318+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:01.873516+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:02.873664+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:03.873822+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
[... 23 one-second monclient cycles at these prioritycache stats (secret expiry 2025-11-29T05:27:04+0000 through 05:27:26+0000); the osd.1 120 heartbeat osd_stat line repeats verbatim 7 times and the rocksdb commit_cache_size / bluestore _resize_shards block repeats verbatim 5 times ...]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
[... 20 one-second monclient cycles at these stats (secret expiry 2025-11-29T05:27:27+0000 through 05:27:46+0000); the heartbeat line repeats verbatim 5 times and the rocksdb/bluestore block 4 times ...]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
[... 15 one-second monclient cycles at these stats (secret expiry 2025-11-29T05:27:47+0000 through 05:28:01+0000); the heartbeat line repeats verbatim 5 times and the rocksdb/bluestore block 3 times ...]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
[... 8 one-second monclient cycles at these stats (secret expiry 2025-11-29T05:28:02+0000 through 05:28:09+0000); one verbatim heartbeat line and one verbatim rocksdb/bluestore block ...]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 925696 heap: 79634432 old mem: 2845415832 new mem: 2845415832
[... 2 one-second monclient cycles at these stats (secret expiry 2025-11-29T05:28:10+0000 through 05:28:11+0000); two verbatim heartbeat lines and one verbatim rocksdb/bluestore block ...]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
[... 29 one-second monclient cycles at these stats (secret expiry 2025-11-29T05:28:12+0000 through 05:28:40+0000); the heartbeat line repeats verbatim 9 times and the rocksdb/bluestore block 6 times ...]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:41.920911+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:42.921036+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:43.921218+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:44.921330+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:45.921475+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:46.921647+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:47.921802+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:48.921962+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:49.922159+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:50.922410+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:51.922556+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:52.922721+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:53.922904+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:54.923054+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:55.923181+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:56.923332+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:57.923444+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:58.923582+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:59.923701+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:00.923811+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:01.923955+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:02.924094+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:03.924462+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:04.924627+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:05.924790+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:06.924974+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:07.925163+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:08.925374+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:09.925605+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:10.925789+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:11.925920+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7055 writes, 29K keys, 7055 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7055 writes, 1300 syncs, 5.43 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 278 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:12.926061+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:13.926236+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:14.926414+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:15.926538+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:16.926726+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:17.926878+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:18.927031+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:19.927188+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:20.927338+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:21.927467+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:22.927638+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:23.927835+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:24.927974+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:25.928140+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:26.928351+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:27.928489+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:28.928644+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:29.928806+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:30.928948+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:31.929123+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:32.929346+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:33.929472+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:34.929602+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:35.929749+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:36.929928+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:37.930093+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:38.930240+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:39.930417+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:40.930579+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:41.930741+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:42.930879+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:43.931080+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:44.931210+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:45.931354+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:46.931576+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:47.931824+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:48.931993+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:49.932129+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:50.932323+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:51.932475+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:52.932656+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:53.932772+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:54.932932+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:55.933133+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:56.933427+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:57.933631+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:58.933848+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:59.934049+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:00.934221+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:01.934417+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:02.934598+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:03.935626+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:04.935817+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:05.935984+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:06.936167+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:07.936342+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:08.936493+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:09.936801+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.909240723s of 600.174255371s, submitted: 90
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:10.936946+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 1900544 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858718 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:11.937077+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:12.937314+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:13.937557+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:14.937703+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:15.937864+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:16.938098+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:17.938245+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:18.938415+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:19.938583+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:20.938726+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:21.938873+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
[... 237 similar lines elided: the once-per-second monclient tick / _check_auth_tickets / _check_auth_rotating triplet (expiry stamps 05:30:22 through 05:31:10), each followed by an identical prioritycache tune_memory line; the rocksdb commit_cache_size pair and bluestore _resize_shards line recur every fifth second, and the osd.1 heartbeat every few seconds, all with the same values as the block above ...]
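The prioritycache tune_memory line that recurs each cycle is the memory autotuner's heartbeat: target is the 4 GiB memory target (4294967296 bytes, presumably osd_memory_target), mapped / unmapped / heap are the allocator's view of the process heap, and old mem / new mem is the aggregate cache budget the tuner settles on. Two things hold in every sample in this section and are easy to verify: mapped + unmapped equals heap exactly, and because mapped memory sits far below the target the budget never moves from 2845415832 bytes. A sketch over the four distinct (mapped, unmapped) pairs that occur here:

    # Verify the tune_memory invariants across this section: mapped + unmapped
    # == heap in every sample, mapped creeps up in small page-multiple steps,
    # and the cache budget ("new mem") never changes.
    samples = [  # (mapped, unmapped) pairs seen in this section; heap is fixed
        (78_798_848, 1_884_160),
        (78_815_232, 1_867_776),
        (78_823_424, 1_859_584),
        (78_831_616, 1_851_392),
    ]
    heap, target, budget = 80_683_008, 4_294_967_296, 2_845_415_832
    for mapped, unmapped in samples:
        assert mapped + unmapped == heap
    growth = samples[-1][0] - samples[0][0]
    print(f"mapped grew {growth} B ({growth // 4096} pages) across the section")
    print(f"headroom to the target: {(target - heap) / 2**20:.0f} MiB; "
          f"cache budget steady at {budget:,d} B")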
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:11.947566+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
[... 33 similar lines elided: the same per-second cycle continues with expiry stamps 05:31:12 through 05:31:18; mapped memory holds at 78815232 and every other value is unchanged ...]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:19.949048+0000)
[... 101 similar lines elided: per-second cycle with expiry stamps 05:31:20 through 05:31:39, mapped memory at 78823424; the five-second rocksdb/bluestore resize group and periodic osd.1 heartbeats repeat with unchanged values ...]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:40.958233+0000)
[... 101 similar lines elided: per-second cycle with expiry stamps 05:31:41 through 05:32:00, mapped memory at 78831616; resize groups and heartbeats as above, ending on an osd.1 heartbeat with the same osd_stat values ...]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:01.962041+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:02.962316+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:03.962496+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:04.962627+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:05.962756+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:06.963023+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:07.963233+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:08.963382+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:09.963537+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:10.963674+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:11.963828+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:12.964043+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:13.964229+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:14.964383+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:15.964574+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:16.964728+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:17.964892+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:18.965098+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:19.965340+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:20.965519+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:21.965717+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:22.965942+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:23.966154+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:24.966350+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:25.966535+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:26.966750+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:27.966964+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:28.967150+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:29.967353+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:30.967623+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:31.967932+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:32.968195+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:33.968412+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:34.968617+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:35.968768+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:36.969006+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:37.969236+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:38.969518+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:39.969688+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:40.969846+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:41.970054+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:42.970245+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:43.970441+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:44.970601+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:45.970750+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:46.970965+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:47.971090+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:48.971247+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:49.971454+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:50.971660+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:51.971853+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:52.972038+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:53.972195+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:54.972334+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:55.972444+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:56.972614+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:57.972726+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:58.972900+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:59.973037+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:00.973198+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:01.973336+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:02.973457+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:03.973631+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:04.973818+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:05.974054+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:06.974347+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:07.974610+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:08.974922+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:09.975133+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:10.975337+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:11.975582+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:12.975814+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:13.976061+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:14.976190+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:15.976427+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:16.976685+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:17.976841+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:18.976974+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:19.977196+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:20.977386+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:21.977613+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:22.977869+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:23.978070+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:24.978583+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:25.978747+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:26.978918+0000)
Nov 29 05:45:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 05:45:34 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2520517162' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:27.979068+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:28.979241+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:29.979332+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x55909a3a6000
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 1802240 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:30.979507+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 200.354232788s of 200.571792603s, submitted: 90
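
The _kv_sync_thread utilization lines give cumulative idle time over a sampling window plus the number of submitted transactions: here BlueStore's kv-sync thread was idle 200.35 s of 200.57 s (~99.9% idle across 90 transactions), while later samples in this burst (e.g. 9.37 s idle of 10.50 s) show it getting noticeably busier as writes land. A trivial sketch for turning these pairs into a busy percentage:

    import re

    samples = [
        "idle 200.354232788s of 200.571792603s, submitted: 90",
        "idle 9.366621971s of 10.501939774s, submitted: 33",
    ]
    pat = re.compile(r"idle ([\d.]+)s of ([\d.]+)s, submitted: (\d+)")
    for s in samples:
        idle, total, txns = map(float, pat.search(s).groups())
        print(f"busy {1 - idle / total:6.2%} over {total:7.2f}s, {int(txns)} txns")
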
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 1802240 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:31.979665+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 1777664 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 867538 data_alloc: 218103808 data_used: 233472
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:32.979868+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 123 ms_handle_reset con 0x55909a3a6000 session 0x559099864960
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 1753088 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:33.980008+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x559097d2ac00
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fca39000/0x0/0x4ffc00000, data 0x12cfa4/0x1e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 18464768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:34.980168+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 124 ms_handle_reset con 0x559097d2ac00 session 0x559099b63a40
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 18456576 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba3a000/0x0/0x4ffc00000, data 0x112cfb3/0x11e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
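
The heartbeat osd_stat lines embed a store_statfs dump in hex. Judging from the numbers (the third field never changes, the first shrinks as the data pair grows), the triple reads available/reserved/total; that ordering is an inference from context, not something the log states. A minimal decoder under that assumption:

    import re

    line = ("osd.1 124 heartbeat osd_stat(store_statfs(0x4fba3a000/0x0/0x4ffc00000, "
            "data 0x112cfb3/0x11e4000, compress 0x0/0x0/0x0, omap 0x63a, "
            "meta 0x2fdf9c6), peers [0,2] op hist [])")

    avail, reserved, total = (int(x, 16) for x in re.search(
        r"store_statfs\(0x([0-9a-f]+)/0x([0-9a-f]+)/0x([0-9a-f]+)", line).groups())
    stored, allocated = (int(x, 16) for x in re.search(
        r"data 0x([0-9a-f]+)/0x([0-9a-f]+)", line).groups())

    gib, mib = 2**30, 2**20
    print(f"total {total / gib:.2f} GiB, available {avail / gib:.2f} GiB "
          f"({(total - avail) / total:.2%} used)")
    print(f"data: {stored / mib:.2f} MiB stored in {allocated / mib:.2f} MiB allocated")

For this sample that works out to a ~20 GiB device with well under 1% used.
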
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:35.980327+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 18456576 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:36.980506+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 18456576 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986920 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:37.980660+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 18440192 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:38.980795+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x112eb4c/0x11e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 18440192 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x112eb4c/0x11e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
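
Each handle_osd_map line records the incremental map range just received, the OSD's current epoch, and the span the sender holds; across this burst osd.1 walks from epoch 120 up through 127, one or two maps at a time. A small sketch to pull the remaining lag (sender's newest epoch minus ours) out of these lines:

    import re

    lines = [
        "osd.1 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]",
        "osd.1 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]",
    ]
    pat = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], "
                     r"i have (\d+), src has \[(\d+),(\d+)\]")
    for l in lines:
        first, last, have, lo, hi = map(int, pat.search(l).groups())
        print(f"have {have}, received [{first},{last}], lag {hi - have}")
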
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:39.980943+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:40.981135+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:41.981298+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989894 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:42.981451+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:43.981624+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:44.981788+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 18399232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:45.981939+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:46.982189+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989894 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:47.982366+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:48.982497+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:49.982627+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:50.982790+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:51.982947+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989894 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:52.983089+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x559098d57c00
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.313594818s of 22.573022842s, submitted: 39
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:53.983293+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 18251776 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:54.983507+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 10
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
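
handle_mgr_map tells the OSD which daemon currently holds the active mgr role and on which addresses: a msgr v2 endpoint on 6800 and a legacy v1 endpoint on 6801, each followed by the instance nonce after the slash. The address vector is easy to pick apart:

    import re

    line = ("mgrc handle_mgr_map Active mgr is now "
            "[v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]")
    for proto, ip, port, nonce in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", line):
        print(f"{proto} endpoint {ip}:{port} (nonce {nonce})")
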
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 18194432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x113532e/0x11ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:55.983690+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 18194432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:56.983875+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x5590981f0800
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 18104320 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992562 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:57.984049+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 17047552 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:58.984221+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba23000/0x0/0x4ffc00000, data 0x1140c8a/0x11fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 17047552 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:59.984368+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 17047552 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:00.984494+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 15917056 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba23000/0x0/0x4ffc00000, data 0x1140c8a/0x11fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:01.984627+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 16089088 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994068 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:02.984739+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 11
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 15966208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:03.984888+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.530145645s of 10.658089638s, submitted: 43
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 15966208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:04.985023+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 15966208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:05.985168+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x11533a0/0x120e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 15925248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:06.985353+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 15818752 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998324 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:07.985491+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 15745024 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:08.985624+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 15745024 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:09.985743+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 15720448 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:10.986185+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba03000/0x0/0x4ffc00000, data 0x115f605/0x121b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 15687680 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:11.986625+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 14639104 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997130 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:12.987790+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 14524416 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x11678e1/0x1222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:13.988796+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.448275566s of 10.000021935s, submitted: 39
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 14401536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:14.989340+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9f3000/0x0/0x4ffc00000, data 0x117085c/0x122b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 14376960 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:15.990037+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 14352384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:16.990304+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 14286848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000216 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:17.990683+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 14286848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:18.990887+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 14278656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:19.991086+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 14254080 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:20.991251+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9e7000/0x0/0x4ffc00000, data 0x117c65f/0x1237000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 14196736 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:21.991670+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 14196736 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998990 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:22.991981+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 14098432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:23.992200+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 14098432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.366621971s of 10.501939774s, submitted: 33
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:24.992403+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 14057472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:25.992587+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9e4000/0x0/0x4ffc00000, data 0x1180231/0x123a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 14008320 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:26.992867+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 14008320 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:27.993040+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001710 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:28.993376+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:29.993524+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:30.993712+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9da000/0x0/0x4ffc00000, data 0x118a393/0x1244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:31.993912+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9da000/0x0/0x4ffc00000, data 0x118a393/0x1244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:32.995053+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002326 data_alloc: 218103808 data_used: 249856
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 14147584 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:33.995345+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 14139392 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:34.995520+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb9d1000/0x0/0x4ffc00000, data 0x1191054/0x124c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 14106624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:35.995725+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 14106624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:36.995923+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 14106624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.712274551s of 13.102365494s, submitted: 45
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:37.996138+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005064 data_alloc: 218103808 data_used: 258048
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 14057472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:38.996329+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 14057472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:39.996523+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 13959168 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb9c4000/0x0/0x4ffc00000, data 0x119c6f7/0x125a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:40.996734+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 13910016 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:41.996887+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 13811712 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:42.996965+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008394 data_alloc: 218103808 data_used: 258048
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb9c0000/0x0/0x4ffc00000, data 0x11a077b/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 13778944 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:43.997156+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 13778944 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:44.997340+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9ba000/0x0/0x4ffc00000, data 0x11a636d/0x1264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 11427840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa801000/0x0/0x4ffc00000, data 0x11bb43e/0x127c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:45.997514+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 11427840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:46.997705+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 12
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 11370496 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.686246872s of 10.000534058s, submitted: 55
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x559096ee4400
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:47.997932+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014752 data_alloc: 218103808 data_used: 266240
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86482944 unmapped: 10985472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:48.998070+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 10928128 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:49.998174+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 10862592 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:50.998316+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7e2000/0x0/0x4ffc00000, data 0x11dabf8/0x129c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7e3000/0x0/0x4ffc00000, data 0x11dab4b/0x129b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 10862592 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:51.998436+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 10862592 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:52.998518+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020594 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 10805248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:53.998625+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e9505/0x12aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 10805248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:54.998730+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 10805248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:55.998854+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 10485760 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:56.998983+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 10485760 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.237507820s of 10.010437965s, submitted: 53
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:57.999088+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025372 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 10502144 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:58.999233+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 10592256 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7a7000/0x0/0x4ffc00000, data 0x121746f/0x12d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [0,0,2])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:59.999350+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 10543104 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:00.999497+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa794000/0x0/0x4ffc00000, data 0x1227b64/0x12ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 10543104 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:01.999667+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa795000/0x0/0x4ffc00000, data 0x1227b32/0x12e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 10518528 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:02.999795+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034972 data_alloc: 218103808 data_used: 278528
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 10461184 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:03.999942+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 10461184 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:05.000051+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fa76f000/0x0/0x4ffc00000, data 0x124b864/0x130e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88006656 unmapped: 9461760 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:06.000191+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9330688 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:07.000392+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 9256960 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:08.000507+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042542 data_alloc: 218103808 data_used: 274432
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.984232903s of 10.380507469s, submitted: 144
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 9404416 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:09.000629+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89112576 unmapped: 8355840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:10.000813+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa330000/0x0/0x4ffc00000, data 0x1279979/0x133c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 8323072 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:11.000967+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89219072 unmapped: 8249344 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:12.001135+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa317000/0x0/0x4ffc00000, data 0x12928da/0x1355000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 8241152 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:13.001316+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044072 data_alloc: 218103808 data_used: 274432
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa316000/0x0/0x4ffc00000, data 0x12959b1/0x1358000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89161728 unmapped: 8306688 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:14.001467+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 8167424 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:15.001623+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 8167424 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:16.002153+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 8118272 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:17.002342+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 8298496 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:18.002477+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053172 data_alloc: 218103808 data_used: 282624
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 8298496 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:19.003196+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.369832993s of 10.755517006s, submitted: 131
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa2e4000/0x0/0x4ffc00000, data 0x12c3c14/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa2d6000/0x0/0x4ffc00000, data 0x12d2acb/0x1398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 8159232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:20.003324+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 8036352 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:21.003419+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 8036352 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:22.003543+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 7979008 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:23.003681+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058860 data_alloc: 218103808 data_used: 286720
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa2bd000/0x0/0x4ffc00000, data 0x12ea68a/0x13b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89595904 unmapped: 7872512 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:24.003857+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 7700480 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:25.141879+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90816512 unmapped: 6651904 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:26.141995+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 6586368 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:27.142130+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fa7c6/0x13c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90431488 unmapped: 7036928 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:28.142248+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061434 data_alloc: 218103808 data_used: 294912
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90464256 unmapped: 7004160 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.862014771s of 10.027328491s, submitted: 53
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:29.142464+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 7020544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:30.142754+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 7020544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:31.142908+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa274000/0x0/0x4ffc00000, data 0x13323fc/0x13fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 7020544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:32.143155+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa274000/0x0/0x4ffc00000, data 0x13323fc/0x13fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90939392 unmapped: 6529024 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:33.143295+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069084 data_alloc: 218103808 data_used: 294912
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 6324224 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:34.143432+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 6324224 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:35.143784+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa23b000/0x0/0x4ffc00000, data 0x136a897/0x1433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90947584 unmapped: 6520832 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:36.143900+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 6430720 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:37.144042+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 6373376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:38.144205+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072284 data_alloc: 218103808 data_used: 303104
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90783744 unmapped: 6684672 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:39.144419+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa217000/0x0/0x4ffc00000, data 0x138e5f8/0x1457000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.346149445s of 10.593280792s, submitted: 69
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 6586368 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:40.144595+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90456064 unmapped: 7012352 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:41.144781+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 6963200 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:42.144980+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa207000/0x0/0x4ffc00000, data 0x139b281/0x1466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 6955008 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:43.145220+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075678 data_alloc: 218103808 data_used: 315392
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 6955008 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:44.145394+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 5840896 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:45.145541+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 5832704 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:46.145676+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1f3000/0x0/0x4ffc00000, data 0x13b0953/0x147b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 5832704 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:47.145885+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 5726208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:48.146082+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076688 data_alloc: 218103808 data_used: 315392
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 5726208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:49.146387+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1e8000/0x0/0x4ffc00000, data 0x13bd0c1/0x1486000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 5726208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:50.146551+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.637351036s of 10.712457657s, submitted: 33
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 5578752 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:51.146691+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 5578752 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:52.146841+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 5554176 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:53.146987+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079056 data_alloc: 218103808 data_used: 315392
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1dd000/0x0/0x4ffc00000, data 0x13c7cd4/0x1491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92119040 unmapped: 5349376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:54.147171+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92119040 unmapped: 5349376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:55.147350+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 5341184 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:56.147554+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92233728 unmapped: 5234688 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:57.147907+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1b7000/0x0/0x4ffc00000, data 0x13ee245/0x14b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 5185536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:58.148066+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081868 data_alloc: 218103808 data_used: 315392
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 5185536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:59.148193+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa18c000/0x0/0x4ffc00000, data 0x1418d7a/0x14e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 4890624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:00.148353+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.908507347s of 10.015370369s, submitted: 35
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 4890624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:01.148548+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 4931584 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:02.148698+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:03.148815+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082444 data_alloc: 218103808 data_used: 315392
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:04.148965+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa179000/0x0/0x4ffc00000, data 0x142b56b/0x14f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:05.149290+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 5332992 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:06.149411+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa179000/0x0/0x4ffc00000, data 0x142b56b/0x14f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa16e000/0x0/0x4ffc00000, data 0x143691e/0x1500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 5332992 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:07.149586+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa16e000/0x0/0x4ffc00000, data 0x143691e/0x1500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 5169152 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:08.149740+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085216 data_alloc: 218103808 data_used: 315392
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92438528 unmapped: 5029888 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:09.149880+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 5693440 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:10.150032+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 5693440 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:11.150159+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.764957428s of 11.050184250s, submitted: 23
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 ms_handle_reset con 0x559096ee4400 session 0x55909a371a40
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:12.150290+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa153000/0x0/0x4ffc00000, data 0x1451ca7/0x151b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 13
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 5087232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:13.150571+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083152 data_alloc: 218103808 data_used: 315392
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 5087232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:14.150760+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 3866624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:15.150901+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 3801088 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:16.151050+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 3801088 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:17.151194+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93904896 unmapped: 3563520 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:18.151347+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa11d000/0x0/0x4ffc00000, data 0x1486bbb/0x1551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094026 data_alloc: 218103808 data_used: 323584
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94060544 unmapped: 3407872 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:19.151473+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 3301376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:20.151618+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2981888 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:21.151761+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0d7000/0x0/0x4ffc00000, data 0x14cad4a/0x1597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.159432411s of 10.446393013s, submitted: 280
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2981888 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:22.151927+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:23.152103+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 2949120 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101764 data_alloc: 218103808 data_used: 327680
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0c2000/0x0/0x4ffc00000, data 0x14e0882/0x15ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:24.152245+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 3235840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:25.152356+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 3014656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:26.152498+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 3014656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:27.152709+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 2924544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:28.152922+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94617600 unmapped: 2850816 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102396 data_alloc: 218103808 data_used: 335872
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:29.153075+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa0a0000/0x0/0x4ffc00000, data 0x1500acb/0x15ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:30.153233+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:31.153395+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:32.153622+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.922378540s of 11.010833740s, submitted: 41
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:33.153771+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107168 data_alloc: 218103808 data_used: 335872
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa078000/0x0/0x4ffc00000, data 0x1528089/0x15f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:34.154121+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:35.154342+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:36.154469+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa078000/0x0/0x4ffc00000, data 0x1528089/0x15f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:37.154643+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:38.154836+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 2138112 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105844 data_alloc: 218103808 data_used: 335872
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:39.155062+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:40.155221+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:41.155378+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa04b000/0x0/0x4ffc00000, data 0x1554fba/0x1623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:42.155546+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 1671168 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.918478012s of 10.029466629s, submitted: 25
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:43.155674+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 1703936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110248 data_alloc: 218103808 data_used: 335872
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:44.155868+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 1703936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:45.156024+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 1654784 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1560189/0x162e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:46.156170+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 1654784 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:47.156321+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 1654784 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:48.156482+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95830016 unmapped: 1638400 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111578 data_alloc: 218103808 data_used: 335872
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:49.156642+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 1523712 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa013000/0x0/0x4ffc00000, data 0x158d904/0x165b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:50.156831+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97034240 unmapped: 1482752 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0x158d933/0x165a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:51.156939+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 2342912 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:52.157072+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 2342912 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.550374985s of 10.174007416s, submitted: 36
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:53.157136+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2375680 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121358 data_alloc: 218103808 data_used: 335872
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:54.157325+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96305152 unmapped: 2211840 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:55.157458+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96305152 unmapped: 2211840 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:56.157576+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96305152 unmapped: 2211840 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15c4606/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:57.157756+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 1343488 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:58.157913+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 1343488 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122254 data_alloc: 218103808 data_used: 335872
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f9fa4000/0x0/0x4ffc00000, data 0x15fb1f5/0x16ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:59.158081+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 1343488 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f9fa4000/0x0/0x4ffc00000, data 0x15fb25a/0x16ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:00.158210+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 3039232 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:01.158332+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 3039232 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:02.158464+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 3039232 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:03.158674+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 2867200 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121988 data_alloc: 218103808 data_used: 335872
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927287102s of 10.717306137s, submitted: 67
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:04.158825+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 2473984 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9f27000/0x0/0x4ffc00000, data 0x167852e/0x1746000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:05.158934+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97140736 unmapped: 2424832 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:06.159069+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97394688 unmapped: 2170880 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:07.159209+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98451456 unmapped: 1114112 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:08.159416+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98451456 unmapped: 1114112 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140216 data_alloc: 218103808 data_used: 344064
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:09.159532+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 876544 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:10.159788+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 1556480 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9ee6000/0x0/0x4ffc00000, data 0x16b7576/0x1788000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:11.159970+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 1556480 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:12.160178+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98156544 unmapped: 1409024 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:13.160351+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98213888 unmapped: 1351680 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142228 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.739322662s of 10.120580673s, submitted: 129
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:14.160496+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98320384 unmapped: 2293760 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9e71000/0x0/0x4ffc00000, data 0x172a418/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:15.160702+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9e63000/0x0/0x4ffc00000, data 0x1738cfd/0x180b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:16.160833+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:17.160989+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:18.161123+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9e60000/0x0/0x4ffc00000, data 0x173b74a/0x180e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148780 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:19.161255+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 1925120 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:20.161477+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 1925120 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:21.161624+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 1703936 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:22.161806+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 2752512 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e04000/0x0/0x4ffc00000, data 0x17943da/0x1868000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:23.161970+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 2686976 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160660 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:24.162079+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99180544 unmapped: 2482176 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:25.162213+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.834702492s of 11.411822319s, submitted: 75
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 2375680 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:26.162310+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 2375680 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:27.162443+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9dc9000/0x0/0x4ffc00000, data 0x17d0f56/0x18a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99475456 unmapped: 2187264 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:28.162767+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 2056192 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156908 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:29.162897+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 1925120 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d7b000/0x0/0x4ffc00000, data 0x181d32b/0x18f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:30.163162+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 1728512 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:31.163322+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 1728512 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:32.163713+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d69000/0x0/0x4ffc00000, data 0x1831591/0x1905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 1720320 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:33.163861+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 2187264 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d2f000/0x0/0x4ffc00000, data 0x186b11c/0x193f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168314 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:34.164033+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 2187264 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:35.164152+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 2187264 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d2f000/0x0/0x4ffc00000, data 0x186b11c/0x193f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:36.164335+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.561640739s of 10.917224884s, submitted: 76
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 1925120 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:37.164540+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 1957888 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:38.164688+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 1908736 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177168 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:39.164839+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cd1000/0x0/0x4ffc00000, data 0x18c9181/0x199d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 1589248 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:40.164966+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101900288 unmapped: 1859584 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:41.165146+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101957632 unmapped: 1802240 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c83000/0x0/0x4ffc00000, data 0x19185d2/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:42.165294+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 1474560 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:43.165408+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101187584 unmapped: 3620864 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182486 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:44.165590+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101244928 unmapped: 3563520 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c20000/0x0/0x4ffc00000, data 0x197b760/0x1a4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:45.165751+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101679104 unmapped: 3129344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:46.165889+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101679104 unmapped: 3129344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.754183769s of 10.750842094s, submitted: 94
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:47.166059+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 2072576 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9bf1000/0x0/0x4ffc00000, data 0x19aad7e/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:48.166208+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103104512 unmapped: 1703936 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189048 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:49.166400+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 1622016 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:50.166573+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103211008 unmapped: 1597440 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:51.166725+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 1589248 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:52.166872+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 1589248 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b80000/0x0/0x4ffc00000, data 0x1a1a472/0x1aee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:53.167045+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 1589248 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194272 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:54.167227+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 1933312 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:55.167425+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 1933312 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:56.167587+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 1933312 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b48000/0x0/0x4ffc00000, data 0x1a52e2b/0x1b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:57.167802+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103055360 unmapped: 2801664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.477082253s of 10.629286766s, submitted: 73
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:58.167991+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 1712128 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200826 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9af5000/0x0/0x4ffc00000, data 0x1aa5978/0x1b79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:59.168160+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104153088 unmapped: 1703936 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:00.168369+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 1359872 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9af6000/0x0/0x4ffc00000, data 0x1aa59e0/0x1b78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:01.168508+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 1359872 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:02.168661+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103063552 unmapped: 2793472 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:03.168808+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103383040 unmapped: 3522560 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202774 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:04.168954+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 3514368 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:05.169086+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9a8d000/0x0/0x4ffc00000, data 0x1b0df35/0x1be1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 3506176 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:06.169199+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104857600 unmapped: 2048000 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:07.169341+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 1949696 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.102632523s of 10.059342384s, submitted: 95
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:08.169452+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105545728 unmapped: 2408448 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223360 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:09.169838+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 2351104 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:10.170027+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 2342912 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:11.170143+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f95b0000/0x0/0x4ffc00000, data 0x1bdaf46/0x1cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 2326528 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:12.170339+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105889792 unmapped: 2064384 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:13.170522+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 2039808 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224156 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:14.170664+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c1dff5/0x1cf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 2039808 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c1dff5/0x1cf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:15.170825+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 1802240 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:16.170992+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 1785856 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:17.171193+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 1703936 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922871590s of 10.191562653s, submitted: 86
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:18.171320+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9532000/0x0/0x4ffc00000, data 0x1c57e35/0x1d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 1531904 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219940 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:19.171451+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 1531904 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9532000/0x0/0x4ffc00000, data 0x1c57e35/0x1d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,1])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:20.171608+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 1433600 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:21.171760+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 1327104 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:22.171929+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94fe000/0x0/0x4ffc00000, data 0x1c8d4d9/0x1d60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 1327104 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:23.172065+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 2359296 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226856 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94e8000/0x0/0x4ffc00000, data 0x1ca3492/0x1d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:24.172192+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 901120 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:25.172424+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 901120 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:26.172579+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 901120 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:27.172683+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94c4000/0x0/0x4ffc00000, data 0x1cc71b5/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 548864 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.210618019s of 10.000334740s, submitted: 57
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94c4000/0x0/0x4ffc00000, data 0x1cc71b5/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:28.173064+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f82f7000/0x0/0x4ffc00000, data 0x1cf41ea/0x1dc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 466944 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237272 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:29.173304+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109600768 unmapped: 450560 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:30.173459+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 1146880 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:31.173576+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 108994560 unmapped: 2105344 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:32.173687+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7107000/0x0/0x4ffc00000, data 0x1d42634/0x1e17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 786432 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:33.173888+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 573440 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240492 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:34.174000+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:35.174111+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x1d559c8/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:36.174213+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:37.174323+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.811102867s of 10.000490189s, submitted: 66
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:38.174468+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x1d559c8/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237784 data_alloc: 218103808 data_used: 368640
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:39.174677+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:40.174861+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:41.175063+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x1d559c8/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:42.175233+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:43.175401+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236390 data_alloc: 218103808 data_used: 368640
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:44.175632+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f5000/0x0/0x4ffc00000, data 0x1d55a5c/0x1e29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:45.175765+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:46.175931+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:47.176114+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.261400223s of 10.000162125s, submitted: 21
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:48.176328+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f70f0000/0x0/0x4ffc00000, data 0x1d5755a/0x1e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241964 data_alloc: 218103808 data_used: 376832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:49.176436+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f70f0000/0x0/0x4ffc00000, data 0x1d5755a/0x1e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:50.176570+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:51.176716+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:52.176825+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:53.176928+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244584 data_alloc: 218103808 data_used: 385024
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:54.177062+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f70ee000/0x0/0x4ffc00000, data 0x1d5929e/0x1e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:55.177210+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:56.177345+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f70ee000/0x0/0x4ffc00000, data 0x1d5929e/0x1e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:57.177507+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f70ed000/0x0/0x4ffc00000, data 0x1d59339/0x1e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 1605632 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.875500679s of 10.000182152s, submitted: 35
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:58.177778+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 1597440 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247528 data_alloc: 218103808 data_used: 385024
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:59.177959+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 1597440 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:00.178153+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70ec000/0x0/0x4ffc00000, data 0x1d59466/0x1e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:01.178315+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:02.178470+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:03.178590+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251238 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:04.178720+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:05.178889+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:06.179027+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70ea000/0x0/0x4ffc00000, data 0x1d5afcb/0x1e34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70ea000/0x0/0x4ffc00000, data 0x1d5afcb/0x1e34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:07.179208+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 1581056 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.932135582s of 10.000720978s, submitted: 29
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:08.179319+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 1581056 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250570 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:09.179440+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 1581056 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:10.179594+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 14
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 1564672 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:11.179745+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 1564672 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70eb000/0x0/0x4ffc00000, data 0x1d5b05f/0x1e33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:12.179918+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2999 syncs, 3.64 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3872 writes, 13K keys, 3872 commit groups, 1.0 writes per commit group, ingest: 20.11 MB, 0.03 MB/s
                                           Interval WAL: 3872 writes, 1699 syncs, 2.28 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 1564672 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x55909995d800
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:13.180050+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70eb000/0x0/0x4ffc00000, data 0x1d5b196/0x1e33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255762 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:14.180217+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e6000/0x0/0x4ffc00000, data 0x1d5b3c2/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e6000/0x0/0x4ffc00000, data 0x1d5b3c2/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:15.180338+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:16.180536+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:17.180771+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5b552/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.908580780s of 10.000893593s, submitted: 23
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:18.180882+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 ms_handle_reset con 0x5590971abc00 session 0x559096728f00
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x5590972f6c00
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 1540096 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e9000/0x0/0x4ffc00000, data 0x1d5b5ba/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254688 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:19.181010+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 ms_handle_reset con 0x55909a3ba400 session 0x5590999f4000
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x559098d80000
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 1531904 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 ms_handle_reset con 0x55909a3b9000 session 0x559099b630e0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x5590997d8800
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:20.181126+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 1531904 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:21.181303+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:22.181447+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:23.181594+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255686 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:24.181748+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5b6ec/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:25.181809+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 2564096 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:26.181976+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 2564096 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:27.182181+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e7000/0x0/0x4ffc00000, data 0x1d5b817/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.950098038s of 10.003911972s, submitted: 16
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e7000/0x0/0x4ffc00000, data 0x1d5b817/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:28.182298+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258052 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:29.182435+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:30.182594+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:31.182739+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:32.182858+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e7000/0x0/0x4ffc00000, data 0x1d5b9bb/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 2547712 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:33.183032+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 2547712 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257758 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:34.183180+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:35.183312+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:36.183503+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:37.183756+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.925294876s of 10.001269341s, submitted: 25
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bab8/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:38.183956+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256700 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:39.184129+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bab8/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:40.184285+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:41.184434+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:42.184567+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:43.184680+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e9000/0x0/0x4ffc00000, data 0x1d5bb3e/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256876 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:44.184809+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:45.184928+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:46.185049+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:47.185194+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bb82/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:48.185311+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.725295067s of 10.793285370s, submitted: 20
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258948 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:49.185471+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:50.185581+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:51.185790+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bcd1/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:52.185913+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:53.186074+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bcd1/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 2498560 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264654 data_alloc: 218103808 data_used: 393216
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:54.186250+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 2449408 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:55.186440+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 2211840 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:56.186611+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 2211840 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:57.186783+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 770048 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7085000/0x0/0x4ffc00000, data 0x1dba787/0x1e96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:58.186930+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.767315865s of 10.010424614s, submitted: 67
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 688128 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f706e000/0x0/0x4ffc00000, data 0x1dd359e/0x1eb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280744 data_alloc: 218103808 data_used: 397312
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:59.187072+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 671744 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f701b000/0x0/0x4ffc00000, data 0x1e24c16/0x1f00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:00.187177+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 122880 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:01.187307+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 2113536 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:02.187452+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6fdd000/0x0/0x4ffc00000, data 0x1e671fe/0x1f41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 958464 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:03.187576+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6fad000/0x0/0x4ffc00000, data 0x1e976c1/0x1f71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 1638400 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278656 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:04.187718+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 1638400 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:05.187833+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 1638400 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:06.187998+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 1515520 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:07.188163+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 1515520 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:08.188325+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f6f78000/0x0/0x4ffc00000, data 0x1ecb3cc/0x1fa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.705419540s of 10.009120941s, submitted: 105
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 1507328 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285022 data_alloc: 218103808 data_used: 401408
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:09.188457+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 1507328 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:10.188602+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 2146304 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f6f78000/0x0/0x4ffc00000, data 0x1ecba62/0x1fa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:11.188706+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 2072576 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:12.188827+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6f40000/0x0/0x4ffc00000, data 0x1f00b90/0x1fdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [1])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 15
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 2244608 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:13.188966+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 2154496 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296348 data_alloc: 218103808 data_used: 409600
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:14.189100+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 2138112 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:15.189287+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6ef6000/0x0/0x4ffc00000, data 0x1f4b216/0x2028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 2138112 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:16.189431+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 2138112 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:17.189577+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 1032192 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:18.189721+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 802816 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300896 data_alloc: 218103808 data_used: 409600
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:19.189834+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6ebf000/0x0/0x4ffc00000, data 0x1f817d1/0x205f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.160684586s of 10.642349243s, submitted: 176
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 638976 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:20.189953+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6eac000/0x0/0x4ffc00000, data 0x1f9425f/0x2072000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 581632 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:21.190078+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 581632 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:22.190382+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 2433024 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:23.190511+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f6e58000/0x0/0x4ffc00000, data 0x1fe61cd/0x20c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 2433024 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:24.190654+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308970 data_alloc: 218103808 data_used: 417792
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f6e58000/0x0/0x4ffc00000, data 0x1fe61cd/0x20c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 2220032 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:25.190786+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 3072000 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:26.190897+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 3022848 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:27.191086+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 2834432 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:28.191214+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2883584 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:29.191335+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315368 data_alloc: 218103808 data_used: 425984
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.695703506s of 10.406598091s, submitted: 85
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 1744896 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:30.191447+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f6dff000/0x0/0x4ffc00000, data 0x203cc62/0x211e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 1572864 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:31.191588+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 2613248 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:32.191716+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 2703360 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f6ddf000/0x0/0x4ffc00000, data 0x205ad14/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:33.191857+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 3145728 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:34.192003+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335690 data_alloc: 218103808 data_used: 430080
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 3145728 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:35.195016+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 3137536 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:36.195226+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 2678784 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:37.195493+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 1474560 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:38.195624+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2163ea1/0x2246000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 1400832 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:39.195743+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340182 data_alloc: 218103808 data_used: 438272
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119455744 unmapped: 1081344 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:40.195853+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.213871956s of 10.749114037s, submitted: 134
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 917504 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:41.195987+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 156 ms_handle_reset con 0x55909995d800 session 0x5590972f52c0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119996416 unmapped: 1589248 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:42.196157+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 2244608 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:43.196355+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 16
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 2244608 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:44.196560+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348512 data_alloc: 218103808 data_used: 446464
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f6c5b000/0x0/0x4ffc00000, data 0x21de3be/0x22c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 2121728 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:45.196775+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 2023424 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:46.196942+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 2072576 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:47.197166+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 1875968 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:48.197383+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 2105344 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:49.197569+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362664 data_alloc: 218103808 data_used: 462848
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6baa000/0x0/0x4ffc00000, data 0x2286816/0x2372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120537088 unmapped: 2097152 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6baa000/0x0/0x4ffc00000, data 0x2286816/0x2372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:50.197690+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.560856819s of 10.171666145s, submitted: 344
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 2088960 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:51.197857+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6baa000/0x0/0x4ffc00000, data 0x2286845/0x2372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 1810432 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:52.197991+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6b86000/0x0/0x4ffc00000, data 0x22ac1e5/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6b77000/0x0/0x4ffc00000, data 0x22ba7b7/0x23a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [0,0,0,2])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 1736704 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:53.198151+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 1736704 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:54.198321+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376386 data_alloc: 218103808 data_used: 462848
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121200640 unmapped: 1433600 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6b49000/0x0/0x4ffc00000, data 0x22e8d2a/0x23d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:55.198469+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 1605632 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:56.198585+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 1441792 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:57.198762+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1327104 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:58.198898+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x559098d69400
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 1138688 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:59.199044+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1394826 data_alloc: 218103808 data_used: 462848
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6a9c000/0x0/0x4ffc00000, data 0x2391dee/0x2481000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 1138688 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:00.199200+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.695484161s of 10.078989983s, submitted: 75
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 17
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 1794048 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:01.199363+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 1794048 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:02.199509+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 3088384 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:03.199674+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 3088384 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:04.199857+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399958 data_alloc: 218103808 data_used: 483328
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 3088384 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f6602000/0x0/0x4ffc00000, data 0x2419885/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:05.200089+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 2834432 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:06.200303+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 2785280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:07.200501+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 2785280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:08.200621+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 2785280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:09.200761+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400984 data_alloc: 218103808 data_used: 479232
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f6592000/0x0/0x4ffc00000, data 0x248d5aa/0x257c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122208256 unmapped: 2523136 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:10.200869+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.681946754s of 10.010542870s, submitted: 132
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 1458176 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:11.200987+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 1458176 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:12.201071+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122347520 unmapped: 2383872 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:13.201236+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 2154496 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:14.201393+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409928 data_alloc: 218103808 data_used: 487424
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 2146304 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:15.201546+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6548000/0x0/0x4ffc00000, data 0x24d8331/0x25c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:16.201693+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:17.201975+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6548000/0x0/0x4ffc00000, data 0x24d8331/0x25c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:18.202118+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:19.202298+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407850 data_alloc: 218103808 data_used: 487424
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:20.202461+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:21.202559+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6538000/0x0/0x4ffc00000, data 0x24e7ec8/0x25d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:22.202668+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6538000/0x0/0x4ffc00000, data 0x24e7ec8/0x25d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.926156998s of 12.125965118s, submitted: 33
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 1818624 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:23.202785+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 1818624 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:24.202954+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:25.203091+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:26.203204+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:27.203411+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:28.203544+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:29.203700+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:30.203858+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:31.203981+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:32.204139+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:33.204306+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:34.204484+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:35.204594+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:36.204710+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:37.204947+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:38.205146+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:39.205345+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:40.205542+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:41.205689+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:42.205843+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:43.206051+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:44.206199+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:45.206375+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:46.206561+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:47.206865+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:48.206982+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:49.207176+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:50.207358+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:51.207527+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:52.207663+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.994432449s of 29.999633789s, submitted: 1
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:53.207815+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123117568 unmapped: 1613824 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:54.207973+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123117568 unmapped: 1613824 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408722 data_alloc: 218103808 data_used: 495616
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f651a000/0x0/0x4ffc00000, data 0x2506322/0x25f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:55.208148+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123117568 unmapped: 1613824 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f651a000/0x0/0x4ffc00000, data 0x2506322/0x25f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:56.208373+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123117568 unmapped: 1613824 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f651a000/0x0/0x4ffc00000, data 0x2506322/0x25f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:57.208609+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123174912 unmapped: 1556480 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:58.208737+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123174912 unmapped: 1556480 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64fa000/0x0/0x4ffc00000, data 0x2525d81/0x2614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64fa000/0x0/0x4ffc00000, data 0x2525d81/0x2614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:59.209017+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123174912 unmapped: 1556480 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409298 data_alloc: 218103808 data_used: 495616
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:00.209130+0000)
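Although journald stamped this whole burst 05:45:34, the expiry timestamps inside the _check_auth_rotating lines advance by almost exactly one second per line, which pins the monclient tick at 1 Hz and is consistent with the burst being generated over several minutes and flushed at once. A quick check on the three stamps above:

    from datetime import datetime

    stamps = ["2025-11-29T05:41:58.208737+0000",
              "2025-11-29T05:41:59.209017+0000",
              "2025-11-29T05:42:00.209130+0000"]
    t = [datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%f%z") for s in stamps]
    print([(b - a).total_seconds() for a, b in zip(t, t[1:])])
    # [1.00028, 1.000113] -> one tick per second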
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64fa000/0x0/0x4ffc00000, data 0x2525d81/0x2614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 1417216 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:01.209247+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 1417216 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:02.209404+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 1417216 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.930094719s of 10.000052452s, submitted: 11
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64c4000/0x0/0x4ffc00000, data 0x255c2b6/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:03.209521+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 1417216 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:04.209641+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 2285568 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412898 data_alloc: 218103808 data_used: 495616
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:05.209749+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 2277376 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:06.209867+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:07.210081+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:08.210230+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64a6000/0x0/0x4ffc00000, data 0x257a4ae/0x2668000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:09.210332+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414468 data_alloc: 218103808 data_used: 495616
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:10.210432+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:11.210539+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64a6000/0x0/0x4ffc00000, data 0x257a4ae/0x2668000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 ms_handle_reset con 0x559098d69400 session 0x55909a3ca1e0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:12.210647+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 1097728 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.509676933s of 10.000202179s, submitted: 215
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f644f000/0x0/0x4ffc00000, data 0x25cf2da/0x26bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:13.210716+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 18
Nov 29 05:45:34 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
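handle_mgr_map tells the OSD's mgr client which manager daemon is currently active; the bracketed address vector lists a msgr2 (v2) and a legacy (v1) endpoint for it, each as protocol:ip:port/nonce. A small parser sketch:

    import re

    LINE = ("Active mgr is now "
            "[v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]")
    for proto, ip, port, nonce in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", LINE):
        print(f"{proto}: {ip} port {port} (nonce {nonce})")
    # v2: 192.168.122.100 port 6800 (nonce 1460327761)
    # v1: 192.168.122.100 port 6801 (nonce 1460327761)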
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124846080 unmapped: 933888 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:14.210891+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124846080 unmapped: 933888 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424324 data_alloc: 218103808 data_used: 495616
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:15.211004+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 1163264 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:16.211139+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124780544 unmapped: 2048000 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6438000/0x0/0x4ffc00000, data 0x25e6322/0x26d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:17.211373+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124780544 unmapped: 2048000 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:18.211485+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125829120 unmapped: 2048000 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:19.211580+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125870080 unmapped: 2007040 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6411000/0x0/0x4ffc00000, data 0x260da12/0x26fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426000 data_alloc: 218103808 data_used: 495616
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:20.211706+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125599744 unmapped: 2277376 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f63ec000/0x0/0x4ffc00000, data 0x2633d5a/0x2722000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:21.211842+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125771776 unmapped: 2105344 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:22.211975+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 3342336 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.760641098s of 10.020527840s, submitted: 37
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:23.212113+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 3694592 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f63b9000/0x0/0x4ffc00000, data 0x2666b72/0x2755000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
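handle_osd_map is the cluster-map catch-up path: the peer shipped incremental maps covering epochs [164,164], this OSD was at 163, and the source holds the full history [1,164]; after applying the increment, the later heartbeat lines report "osd.1 164". The bookkeeping reduces to:

    def epochs_needed(have: int, first: int, last: int) -> range:
        # only incrementals newer than what we already have get applied
        return range(max(first, have + 1), last + 1)

    print(list(epochs_needed(have=163, first=164, last=164)))   # [164]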
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:24.212213+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 3457024 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435388 data_alloc: 218103808 data_used: 503808
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:25.212327+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 3457024 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:26.212453+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125476864 unmapped: 3448832 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:27.212599+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f635c000/0x0/0x4ffc00000, data 0x26c1d9f/0x27b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,4])
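This is the first heartbeat in the burst with a non-empty "op hist": osd_stat carries a power-of-two histogram of queued-op ages, so a count in bucket i means ops aged on the order of 2**i units. The exact unit is not recoverable from this log alone, so treat the decoding below as an assumption:

    hist = [0, 0, 0, 0, 0, 0, 0, 0, 0, 4]      # from the heartbeat above
    for i, n in enumerate(hist):
        if n:
            # assumed pow2 bucketing: bucket i covers roughly [2**(i-1), 2**i)
            print(f"bucket {i}: {n} ops aged roughly 2**{i - 1}..2**{i}")
    # bucket 9: 4 ops aged roughly 2**8..2**9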
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:28.212708+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:29.212834+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437270 data_alloc: 218103808 data_used: 503808
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:30.212962+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:31.213100+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:32.213298+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:33.213453+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125837312 unmapped: 3088384 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6324000/0x0/0x4ffc00000, data 0x26fb273/0x27ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2,1])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.065333366s of 10.991191864s, submitted: 51
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:34.213642+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125837312 unmapped: 3088384 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437020 data_alloc: 218103808 data_used: 499712
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6319000/0x0/0x4ffc00000, data 0x27061cb/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:35.213790+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125837312 unmapped: 3088384 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:36.213954+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 126001152 unmapped: 2924544 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
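_renew_subs re-sends the client's map subscriptions (here, the osdmap feed) to the connected monitor; port 3300 is the standard msgr2 monitor port. A pattern sketch with hypothetical names, not Ceph code:

    # hypothetical sketch of the subscribe/renew pattern behind the two lines above
    subs = {"osdmap": 165}              # assumed: "send me osdmaps from epoch 165 on"

    def renew_subs(send_mon_message):
        # _renew_subs: push the whole subscription set to the current mon
        send_mon_message("mon.compute-0", subs)

    renew_subs(lambda mon, what: print(f"_send_mon_message to {mon}: {what}"))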
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:37.214143+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125968384 unmapped: 2957312 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:38.214246+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125968384 unmapped: 2957312 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:39.214410+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125968384 unmapped: 2957312 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447224 data_alloc: 218103808 data_used: 512000
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f72cf000/0x0/0x4ffc00000, data 0x27703bb/0x285f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:40.214534+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 3743744 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:41.214672+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 3735552 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:42.214845+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 3735552 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f72cc000/0x0/0x4ffc00000, data 0x2774689/0x2862000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:43.215024+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 126238720 unmapped: 2686976 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:44.215208+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 126238720 unmapped: 2686976 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f72af000/0x0/0x4ffc00000, data 0x27915e4/0x287f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 165 handle_osd_map epochs [166,166], i have 166, src has [1,166]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.848725796s of 10.614721298s, submitted: 42
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450028 data_alloc: 218103808 data_used: 520192
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:45.215414+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127303680 unmapped: 2670592 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f72ab000/0x0/0x4ffc00000, data 0x27931fa/0x2882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:46.215707+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127352832 unmapped: 2621440 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:47.216039+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127352832 unmapped: 2621440 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:48.216169+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 2564096 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f728d000/0x0/0x4ffc00000, data 0x27b2240/0x28a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:49.216346+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 2564096 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451364 data_alloc: 218103808 data_used: 520192
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:50.216493+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 2564096 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:51.216636+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:52.216807+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:53.216923+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:54.217065+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7289000/0x0/0x4ffc00000, data 0x27b3cc3/0x28a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453538 data_alloc: 218103808 data_used: 528384
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:55.217198+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:56.217303+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:57.217458+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:58.217581+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.579467773s of 13.778797150s, submitted: 57
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:59.217687+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457502 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f726e000/0x0/0x4ffc00000, data 0x27cea44/0x28c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:00.217883+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:01.218045+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f726e000/0x0/0x4ffc00000, data 0x27cea44/0x28c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:02.218240+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7256000/0x0/0x4ffc00000, data 0x27e6e38/0x28d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:03.218432+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:04.218554+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457618 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:05.218675+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:06.218791+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7256000/0x0/0x4ffc00000, data 0x27e6e38/0x28d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:07.218935+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:08.219046+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 3473408 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:09.219227+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 3473408 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466250 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:10.219357+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127754240 unmapped: 3268608 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:11.219484+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.967459679s of 13.195343018s, submitted: 25
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 3063808 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7208000/0x0/0x4ffc00000, data 0x2833179/0x2926000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:12.219655+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:13.219822+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:14.220017+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463028 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:15.220164+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:16.220313+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f71c5000/0x0/0x4ffc00000, data 0x2877752/0x2969000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:17.220505+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:18.220667+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128008192 unmapped: 3014656 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:19.220839+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128253952 unmapped: 2768896 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467576 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:20.221008+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128270336 unmapped: 2752512 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:21.221220+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7171000/0x0/0x4ffc00000, data 0x28ca067/0x29bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128286720 unmapped: 2736128 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:22.221411+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.849212646s of 10.941687584s, submitted: 26
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128327680 unmapped: 2695168 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:23.221570+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129499136 unmapped: 1523712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:24.221722+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129499136 unmapped: 1523712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473180 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:25.221904+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129499136 unmapped: 1523712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:26.222066+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129564672 unmapped: 1458176 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f711e000/0x0/0x4ffc00000, data 0x291ced7/0x2a10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:27.222236+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129564672 unmapped: 1458176 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:28.222357+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129581056 unmapped: 2490368 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:29.222670+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129671168 unmapped: 2400256 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479142 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70f1000/0x0/0x4ffc00000, data 0x294a66d/0x2a3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:30.222842+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129671168 unmapped: 2400256 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:31.222963+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129843200 unmapped: 2228224 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:32.223081+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129843200 unmapped: 2228224 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.386429787s of 10.458808899s, submitted: 28
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:33.223215+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128794624 unmapped: 3276800 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:34.223339+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128827392 unmapped: 3244032 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e129/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476518 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:35.223476+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e1f3/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:36.223615+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e1f3/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:37.223776+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:38.223958+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:39.224110+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e1f3/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475670 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:40.224221+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:41.224367+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:42.224533+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.957421303s of 10.000261307s, submitted: 8
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:43.224632+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:44.224906+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e2bd/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477438 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:45.225104+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:46.225245+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:47.225405+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:48.225578+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:49.225708+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e387/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476748 data_alloc: 218103808 data_used: 536576
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:50.225885+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 3227648 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:51.226054+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70bf000/0x0/0x4ffc00000, data 0x297e3b6/0x2a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 3227648 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:52.226312+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 3227648 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70bf000/0x0/0x4ffc00000, data 0x297e3b6/0x2a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.911422729s of 10.000315666s, submitted: 13
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:53.226447+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:54.226578+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479864 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:55.226741+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:56.226921+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:57.227105+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:58.227354+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f70bb000/0x0/0x4ffc00000, data 0x297ff9c/0x2a72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:59.227509+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479864 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:00.227685+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:01.227805+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 3874816 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:02.227968+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 3874816 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:03.228153+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 3874816 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:04.228302+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128212992 unmapped: 3858432 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482838 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:05.228426+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:06.228599+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:07.228789+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.910813332s of 15.001037598s, submitted: 43
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:08.228939+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:09.229105+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:10.229236+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:11.229372+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:12.229486+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:13.229598+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:14.229698+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:15.231372+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:16.231494+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:17.231680+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:18.231810+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:19.231988+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:20.232114+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:21.232251+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:22.232423+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:23.232575+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:24.232742+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:25.232880+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:26.233056+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:27.233317+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:28.233484+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:29.233631+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:30.233794+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:31.233958+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:32.234114+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:33.234242+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:34.234364+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:35.234486+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:36.234665+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:37.234829+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:38.234949+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:39.235099+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:40.235301+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:41.235471+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:42.235605+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:43.235743+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:44.235887+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:45.236104+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:46.236393+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:47.236682+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:48.236844+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:49.236987+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:50.237158+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:51.237356+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:52.237498+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:53.237893+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:54.238018+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:55.238122+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:56.238353+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:57.238515+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 3825664 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:58.238625+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 3825664 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:59.238754+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 3825664 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:34 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:34 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:00.238907+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 3825664 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:01.239025+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128262144 unmapped: 3809280 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: do_command 'config diff' '{prefix=config diff}'
Nov 29 05:45:34 compute-0 ceph-osd[90181]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 05:45:34 compute-0 ceph-osd[90181]: do_command 'config show' '{prefix=config show}'
Nov 29 05:45:34 compute-0 ceph-osd[90181]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:02.239154+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 05:45:34 compute-0 ceph-osd[90181]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 05:45:34 compute-0 ceph-osd[90181]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 05:45:34 compute-0 ceph-osd[90181]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128212992 unmapped: 4907008 heap: 133120000 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:03.239301+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 5062656 heap: 133120000 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:45:34 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:04.239418+0000)
Nov 29 05:45:34 compute-0 ceph-osd[90181]: do_command 'log dump' '{prefix=log dump}'
Nov 29 05:45:35 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:45:35 compute-0 podman[277338]: 2025-11-29 05:45:35.013863253 +0000 UTC m=+0.068696854 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:45:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 05:45:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626940095' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 05:45:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 05:45:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1379584460' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 05:45:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 05:45:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1578630785' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 05:45:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 05:45:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347351187' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 05:45:35 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2520517162' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 05:45:35 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1626940095' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 05:45:35 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1379584460' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 05:45:35 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1578630785' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 05:45:35 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3347351187' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 05:45:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 05:45:36 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345632532' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 05:45:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 05:45:36 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660477510' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 05:45:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 05:45:36 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3540648910' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 05:45:36 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 05:45:36 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083275423' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 05:45:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 05:45:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521307226' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 05:45:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 05:45:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465502616' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 05:45:37 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3345632532' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 05:45:37 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1660477510' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 05:45:37 compute-0 ceph-mon[75176]: pgmap v1267: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:37 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3540648910' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 05:45:37 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/4083275423' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 05:45:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14631 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 05:45:37 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3520484038' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 05:45:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14635 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:37 compute-0 nova_compute[254898]: 2025-11-29 05:45:37.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:45:37 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14637 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:38 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2521307226' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 05:45:38 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1465502616' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 05:45:38 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3520484038' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 05:45:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:38 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14639 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:38 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14641 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:38 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14643 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14647 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:39 compute-0 podman[277834]: 2025-11-29 05:45:39.086215478 +0000 UTC m=+0.121019700 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 05:45:39 compute-0 ceph-mon[75176]: from='client.14631 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:39 compute-0 ceph-mon[75176]: from='client.14635 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:39 compute-0 ceph-mon[75176]: from='client.14637 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:39 compute-0 ceph-mon[75176]: pgmap v1268: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:39 compute-0 ceph-mon[75176]: from='client.14639 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 05:45:39 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2214700980' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:12:59.998842+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 1286144 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:00.998994+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68911104 unmapped: 1286144 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:01.999166+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 1277952 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:02.999306+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 1277952 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 828073 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:03.999417+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.080554962s of 16.093553543s, submitted: 4
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68919296 unmapped: 1277952 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:04.999562+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:34.195051+0000 osd.0 (osd.0) 134 : cluster [DBG] 8.b scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:34.209169+0000 osd.0 (osd.0) 135 : cluster [DBG] 8.b scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 135) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:34.195051+0000 osd.0 (osd.0) 134 : cluster [DBG] 8.b scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:34.209169+0000 osd.0 (osd.0) 135 : cluster [DBG] 8.b scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 1269760 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:05.999786+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68927488 unmapped: 1269760 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:06.999945+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:36.146579+0000 osd.0 (osd.0) 136 : cluster [DBG] 11.1 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:36.160690+0000 osd.0 (osd.0) 137 : cluster [DBG] 11.1 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 137) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:36.146579+0000 osd.0 (osd.0) 136 : cluster [DBG] 11.1 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:36.160690+0000 osd.0 (osd.0) 137 : cluster [DBG] 11.1 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 1261568 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:08.000135+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 1261568 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 830368 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:09.000321+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 1261568 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:10.000498+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68943872 unmapped: 1253376 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:11.000683+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:40.125808+0000 osd.0 (osd.0) 138 : cluster [DBG] 7.6 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:40.140002+0000 osd.0 (osd.0) 139 : cluster [DBG] 7.6 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 139) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:40.125808+0000 osd.0 (osd.0) 138 : cluster [DBG] 7.6 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:40.140002+0000 osd.0 (osd.0) 139 : cluster [DBG] 7.6 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68960256 unmapped: 1236992 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:12.001293+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:41.128158+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.f scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:41.149378+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.f scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 141) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:41.128158+0000 osd.0 (osd.0) 140 : cluster [DBG] 8.f scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:41.149378+0000 osd.0 (osd.0) 141 : cluster [DBG] 8.f scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1220608 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:13.001504+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68984832 unmapped: 1212416 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833809 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:14.001688+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:43.041934+0000 osd.0 (osd.0) 142 : cluster [DBG] 7.4 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:43.055992+0000 osd.0 (osd.0) 143 : cluster [DBG] 7.4 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 143) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:43.041934+0000 osd.0 (osd.0) 142 : cluster [DBG] 7.4 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:43.055992+0000 osd.0 (osd.0) 143 : cluster [DBG] 7.4 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 68993024 unmapped: 1204224 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:15.001889+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:44.081660+0000 osd.0 (osd.0) 144 : cluster [DBG] 3.c scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:44.095809+0000 osd.0 (osd.0) 145 : cluster [DBG] 3.c scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.801441193s of 10.841829300s, submitted: 12
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 145) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:44.081660+0000 osd.0 (osd.0) 144 : cluster [DBG] 3.c scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:44.095809+0000 osd.0 (osd.0) 145 : cluster [DBG] 3.c scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69009408 unmapped: 1187840 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:16.002102+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:45.037044+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.4 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:45.051107+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.4 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 147) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:45.037044+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.4 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:45.051107+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.4 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69017600 unmapped: 1179648 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:17.002328+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:46.012931+0000 osd.0 (osd.0) 148 : cluster [DBG] 7.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:46.027041+0000 osd.0 (osd.0) 149 : cluster [DBG] 7.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 149) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:46.012931+0000 osd.0 (osd.0) 148 : cluster [DBG] 7.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:46.027041+0000 osd.0 (osd.0) 149 : cluster [DBG] 7.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 1171456 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:18.002584+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 1 last_log 150 sent 149 num 1 unsent 1 sending 1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:48.001228+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.6 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 150) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:48.001228+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.6 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 1171456 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838398 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:19.002907+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 1 last_log 151 sent 150 num 1 unsent 1 sending 1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:48.015299+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.6 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 151) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:48.015299+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.6 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:20.003134+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69033984 unmapped: 1163264 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:21.003350+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:50.031539+0000 osd.0 (osd.0) 152 : cluster [DBG] 3.f scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:50.045592+0000 osd.0 (osd.0) 153 : cluster [DBG] 3.f scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 153) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:50.031539+0000 osd.0 (osd.0) 152 : cluster [DBG] 3.f scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:50.045592+0000 osd.0 (osd.0) 153 : cluster [DBG] 3.f scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69042176 unmapped: 1155072 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:22.003617+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:51.975828+0000 osd.0 (osd.0) 154 : cluster [DBG] 11.6 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:51.989917+0000 osd.0 (osd.0) 155 : cluster [DBG] 11.6 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 155) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:51.975828+0000 osd.0 (osd.0) 154 : cluster [DBG] 11.6 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:51.989917+0000 osd.0 (osd.0) 155 : cluster [DBG] 11.6 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69050368 unmapped: 1146880 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:23.003803+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:52.944705+0000 osd.0 (osd.0) 156 : cluster [DBG] 11.19 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:52.958854+0000 osd.0 (osd.0) 157 : cluster [DBG] 11.19 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 157) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:52.944705+0000 osd.0 (osd.0) 156 : cluster [DBG] 11.19 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:52.958854+0000 osd.0 (osd.0) 157 : cluster [DBG] 11.19 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841842 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:24.004037+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:25.004185+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:54.934016+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.1a scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:54.948122+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.1a scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 159) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:54.934016+0000 osd.0 (osd.0) 158 : cluster [DBG] 8.1a scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:54.948122+0000 osd.0 (osd.0) 159 : cluster [DBG] 8.1a scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69058560 unmapped: 1138688 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:26.004315+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.873671532s of 11.927726746s, submitted: 14
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:27.004488+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:56.964711+0000 osd.0 (osd.0) 160 : cluster [DBG] 3.12 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:13:56.978695+0000 osd.0 (osd.0) 161 : cluster [DBG] 3.12 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 161) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:56.964711+0000 osd.0 (osd.0) 160 : cluster [DBG] 3.12 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:13:56.978695+0000 osd.0 (osd.0) 161 : cluster [DBG] 3.12 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:28.004740+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69066752 unmapped: 1130496 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844138 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:29.004861+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:30.005031+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69074944 unmapped: 1122304 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:31.005193+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69083136 unmapped: 1114112 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:32.005426+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:33.005559+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69091328 unmapped: 1105920 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844138 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:34.005697+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1097728 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:35.005897+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:04.057179+0000 osd.0 (osd.0) 162 : cluster [DBG] 3.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:04.071250+0000 osd.0 (osd.0) 163 : cluster [DBG] 3.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 163) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:04.057179+0000 osd.0 (osd.0) 162 : cluster [DBG] 3.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:04.071250+0000 osd.0 (osd.0) 163 : cluster [DBG] 3.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 1081344 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:36.006117+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 4 last_log 167 sent 163 num 4 unsent 4 sending 4
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:05.027148+0000 osd.0 (osd.0) 164 : cluster [DBG] 8.18 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:05.041233+0000 osd.0 (osd.0) 165 : cluster [DBG] 8.18 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:05.987164+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.1d scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:06.001298+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.1d scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 167) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:05.027148+0000 osd.0 (osd.0) 164 : cluster [DBG] 8.18 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:05.041233+0000 osd.0 (osd.0) 165 : cluster [DBG] 8.18 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:05.987164+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.1d scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:06.001298+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.1d scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1073152 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:37.006348+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:38.006487+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 847581 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:39.006638+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69132288 unmapped: 1064960 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.996809959s of 13.023008347s, submitted: 8
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:40.006787+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:09.987745+0000 osd.0 (osd.0) 168 : cluster [DBG] 3.15 deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:10.001800+0000 osd.0 (osd.0) 169 : cluster [DBG] 3.15 deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 169) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:09.987745+0000 osd.0 (osd.0) 168 : cluster [DBG] 3.15 deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:10.001800+0000 osd.0 (osd.0) 169 : cluster [DBG] 3.15 deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:41.007004+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69148672 unmapped: 1048576 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:42.007165+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:43.007417+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848729 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:44.007610+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1040384 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:45.007756+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:14.983450+0000 osd.0 (osd.0) 170 : cluster [DBG] 3.17 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:14.997435+0000 osd.0 (osd.0) 171 : cluster [DBG] 3.17 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 171) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:14.983450+0000 osd.0 (osd.0) 170 : cluster [DBG] 3.17 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:14.997435+0000 osd.0 (osd.0) 171 : cluster [DBG] 3.17 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69165056 unmapped: 1032192 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:46.008069+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1024000 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:47.008239+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:48.008333+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849877 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:49.008492+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1015808 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:50.008667+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 999424 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:51.008836+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 999424 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:52.009048+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 999424 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:53.009360+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.113968849s of 13.130171776s, submitted: 4
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 991232 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 851025 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:54.010082+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:23.117949+0000 osd.0 (osd.0) 172 : cluster [DBG] 8.1f deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:23.132060+0000 osd.0 (osd.0) 173 : cluster [DBG] 8.1f deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 173) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:23.117949+0000 osd.0 (osd.0) 172 : cluster [DBG] 8.1f deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:23.132060+0000 osd.0 (osd.0) 173 : cluster [DBG] 8.1f deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69214208 unmapped: 983040 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:55.010798+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:56.011314+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:25.027665+0000 osd.0 (osd.0) 174 : cluster [DBG] 7.13 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:25.041689+0000 osd.0 (osd.0) 175 : cluster [DBG] 7.13 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 175) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:25.027665+0000 osd.0 (osd.0) 174 : cluster [DBG] 7.13 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:25.041689+0000 osd.0 (osd.0) 175 : cluster [DBG] 7.13 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69222400 unmapped: 974848 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:57.011992+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:58.012460+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852173 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:13:59.012592+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69230592 unmapped: 966656 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:00.012759+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69238784 unmapped: 958464 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:01.012869+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69238784 unmapped: 958464 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:02.013041+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69238784 unmapped: 958464 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:03.013168+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852173 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:04.013319+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 950272 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:05.013542+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:06.013680+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.830239296s of 13.843114853s, submitted: 4
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:07.013886+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:36.961093+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.1b scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:36.985820+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.1b scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 177) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:36.961093+0000 osd.0 (osd.0) 176 : cluster [DBG] 9.1b scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:36.985820+0000 osd.0 (osd.0) 177 : cluster [DBG] 9.1b scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69255168 unmapped: 942080 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:08.014143+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 933888 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853321 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:09.014352+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69263360 unmapped: 933888 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:10.014582+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 925696 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:11.014709+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 917504 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:12.014918+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:41.926509+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.1 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:41.968855+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.1 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 179) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:41.926509+0000 osd.0 (osd.0) 178 : cluster [DBG] 9.1 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:41.968855+0000 osd.0 (osd.0) 179 : cluster [DBG] 9.1 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 909312 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:13.015120+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:42.926455+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.11 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:42.961626+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.11 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 181) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:42.926455+0000 osd.0 (osd.0) 180 : cluster [DBG] 9.11 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:42.961626+0000 osd.0 (osd.0) 181 : cluster [DBG] 9.11 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69304320 unmapped: 892928 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 855616 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:14.015351+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69304320 unmapped: 892928 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:15.015585+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69312512 unmapped: 884736 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:16.015713+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69312512 unmapped: 884736 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:17.015898+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69312512 unmapped: 884736 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:18.016048+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.965335846s of 11.990801811s, submitted: 6
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69320704 unmapped: 876544 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856763 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:19.016202+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:48.951962+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.d scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:48.994298+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.d scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 183) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:48.951962+0000 osd.0 (osd.0) 182 : cluster [DBG] 9.d scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:48.994298+0000 osd.0 (osd.0) 183 : cluster [DBG] 9.d scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.b deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.b deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:20.016506+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:49.967704+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.b deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:49.999480+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.b deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 185) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:49.967704+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.b deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:49.999480+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.b deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69328896 unmapped: 868352 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:21.016699+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 860160 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:22.016896+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 860160 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:23.017064+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 851968 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859057 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:24.017253+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:53.925224+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:14:53.963968+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 187) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:53.925224+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:14:53.963968+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 851968 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:25.017710+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 860160 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:26.017841+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 860160 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:27.017981+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 860160 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:28.018149+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 851968 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859057 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:29.018299+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69345280 unmapped: 851968 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:30.018503+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.632668495s of 11.965296745s, submitted: 6
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 843776 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:31.018709+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:00.917308+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.5 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:00.956081+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.5 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 189) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:00.917308+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.5 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:00.956081+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.5 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 827392 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:32.019082+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:01.962246+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.3 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:02.008144+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.3 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 191) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:01.962246+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.3 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:02.008144+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.3 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 827392 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:33.019301+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 1 last_log 192 sent 191 num 1 unsent 1 sending 1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:03.003856+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.1d scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 192) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:03.003856+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.1d scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 827392 heap: 70197248 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 862499 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:34.019580+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 1 last_log 193 sent 192 num 1 unsent 1 sending 1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:03.039141+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.1d scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 193) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:03.039141+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.1d scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:35.019785+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:04.993681+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.3 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:05.014852+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.3 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 195) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:04.993681+0000 osd.0 (osd.0) 194 : cluster [DBG] 6.3 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:05.014852+0000 osd.0 (osd.0) 195 : cluster [DBG] 6.3 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:36.019995+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:37.020169+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:06.992022+0000 osd.0 (osd.0) 196 : cluster [DBG] 6.7 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:07.009590+0000 osd.0 (osd.0) 197 : cluster [DBG] 6.7 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 1835008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 197) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:06.992022+0000 osd.0 (osd.0) 196 : cluster [DBG] 6.7 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:07.009590+0000 osd.0 (osd.0) 197 : cluster [DBG] 6.7 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:38.020363+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 1835008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:39.020535+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69410816 unmapped: 1835008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 864793 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:40.020744+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:09.034715+0000 osd.0 (osd.0) 198 : cluster [DBG] 6.5 deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:09.056117+0000 osd.0 (osd.0) 199 : cluster [DBG] 6.5 deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1818624 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 199) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:09.034715+0000 osd.0 (osd.0) 198 : cluster [DBG] 6.5 deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:09.056117+0000 osd.0 (osd.0) 199 : cluster [DBG] 6.5 deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:41.021026+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1818624 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:42.021324+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69427200 unmapped: 1818624 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.965991974s of 12.007235527s, submitted: 12
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:43.021534+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:12.924486+0000 osd.0 (osd.0) 200 : cluster [DBG] 6.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:12.938622+0000 osd.0 (osd.0) 201 : cluster [DBG] 6.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1802240 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 201) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:12.924486+0000 osd.0 (osd.0) 200 : cluster [DBG] 6.9 scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:12.938622+0000 osd.0 (osd.0) 201 : cluster [DBG] 6.9 scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:44.021729+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1802240 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 867087 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:45.021880+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69443584 unmapped: 1802240 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:46.021999+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:15.939903+0000 osd.0 (osd.0) 202 : cluster [DBG] 6.a scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:15.954005+0000 osd.0 (osd.0) 203 : cluster [DBG] 6.a scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1785856 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 203) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:15.939903+0000 osd.0 (osd.0) 202 : cluster [DBG] 6.a scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:15.954005+0000 osd.0 (osd.0) 203 : cluster [DBG] 6.a scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.16 deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.16 deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:47.022212+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:16.959790+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.16 deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:16.995061+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.16 deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69459968 unmapped: 1785856 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 205) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:16.959790+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.16 deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:16.995061+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.16 deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:48.022453+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69468160 unmapped: 1777664 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:49.022627+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:18.959178+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.1c scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:18.997964+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.1c scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 1769472 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 870530 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 207) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:18.959178+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.1c scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:18.997964+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.1c scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:50.022820+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 1761280 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:51.023095+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69476352 unmapped: 1769472 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:52.023337+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 1 last_log 208 sent 207 num 1 unsent 1 sending 1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:22.007168+0000 osd.0 (osd.0) 208 : cluster [DBG] 9.1e deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69484544 unmapped: 1761280 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 208) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:22.007168+0000 osd.0 (osd.0) 208 : cluster [DBG] 9.1e deep-scrub starts
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:53.023843+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  log_queue is 1 last_log 209 sent 208 num 1 unsent 1 sending 1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  will send 2025-11-29T05:15:22.042467+0000 osd.0 (osd.0) 209 : cluster [DBG] 9.1e deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1753088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client handle_log_ack log(last 209) v1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: log_client  logged 2025-11-29T05:15:22.042467+0000 osd.0 (osd.0) 209 : cluster [DBG] 9.1e deep-scrub ok
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:54.024072+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 1753088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:55.024218+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1744896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:56.024339+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1744896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:57.024545+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69500928 unmapped: 1744896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:58.024711+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1736704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:14:59.024863+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1736704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:00.025005+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69509120 unmapped: 1736704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:01.025145+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69517312 unmapped: 1728512 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:02.025324+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1720320 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:03.025489+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69525504 unmapped: 1720320 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:04.025696+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1712128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:05.025893+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1712128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:06.026033+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69533696 unmapped: 1712128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:07.026190+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1703936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:08.026337+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69541888 unmapped: 1703936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:09.026503+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1695744 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:10.027131+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1695744 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:11.027325+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69550080 unmapped: 1695744 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:12.027455+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1679360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:13.027579+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69566464 unmapped: 1679360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:14.027692+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1671168 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:15.027819+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1671168 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:16.027959+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69574656 unmapped: 1671168 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:17.028123+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1662976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:18.028301+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1662976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:19.028443+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1662976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:20.028558+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1654784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:21.028682+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 1662976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:22.028854+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1654784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:23.028981+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1654784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:24.029197+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 1654784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:25.029324+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69599232 unmapped: 1646592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:26.029448+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1638400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:27.029598+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69607424 unmapped: 1638400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:28.029953+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1630208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:29.030225+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1630208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:30.030413+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69615616 unmapped: 1630208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:31.030841+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1613824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:32.031059+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1613824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:33.031198+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69632000 unmapped: 1613824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:34.031366+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1605632 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:35.031537+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69640192 unmapped: 1605632 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:36.031677+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 1597440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:37.031983+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1589248 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:38.032190+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69656576 unmapped: 1589248 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:39.032510+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69664768 unmapped: 1581056 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:40.032678+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 1572864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:41.032845+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69689344 unmapped: 1556480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:42.033056+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1548288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:43.033249+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 1548288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:44.033472+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 1540096 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:45.033592+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 1540096 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:46.033752+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69713920 unmapped: 1531904 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:47.033954+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1523712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:48.034085+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 1523712 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:49.034317+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1515520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:50.034430+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1515520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:51.034548+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 1515520 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:52.034743+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 1507328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:53.034905+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 1507328 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:54.035059+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 1499136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:55.035222+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69746688 unmapped: 1499136 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:56.035493+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69763072 unmapped: 1482752 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:57.035614+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:58.035749+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:15:59.036037+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69771264 unmapped: 1474560 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:00.036338+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:01.036513+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:02.037180+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69779456 unmapped: 1466368 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:03.037397+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:04.037592+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:05.037767+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69787648 unmapped: 1458176 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:06.037989+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1441792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:07.038130+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69804032 unmapped: 1441792 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:08.038315+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:09.038493+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:10.038658+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69812224 unmapped: 1433600 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:11.038905+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1409024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:12.039149+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69836800 unmapped: 1409024 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:13.039370+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:14.039511+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:15.039642+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 1400832 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:16.039826+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:17.039959+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:18.040092+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:19.040233+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69853184 unmapped: 1392640 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:20.040330+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1384448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:21.040509+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1384448 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:22.040674+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:23.040806+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:24.040963+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:25.041119+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:26.041373+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1376256 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:27.041534+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:28.041660+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1368064 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:29.041890+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:30.042503+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69885952 unmapped: 1359872 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:31.042844+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1351680 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:32.043353+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1343488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:33.043478+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1343488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:34.043702+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1343488 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:35.043874+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1335296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:36.044040+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1335296 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:37.044187+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1327104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:38.044364+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1327104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:39.044511+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69918720 unmapped: 1327104 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:40.044835+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:41.045002+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:42.045192+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1318912 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:43.045355+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1310720 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:44.045469+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1310720 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:45.045622+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1310720 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:46.045831+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1302528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:47.045995+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1302528 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:48.046142+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:49.046301+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:50.046444+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1294336 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:51.046610+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1286144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:52.046807+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1286144 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:53.046995+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1277952 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:54.047130+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1277952 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:55.047324+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1269760 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:56.047531+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:57.047661+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1253376 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:58.047831+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:16:59.047991+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:00.048141+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1245184 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:01.048345+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1236992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:02.048498+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1236992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:03.048635+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1236992 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:04.048756+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:05.048879+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:06.048994+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70017024 unmapped: 1228800 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:07.049103+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1220608 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:08.049293+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70025216 unmapped: 1220608 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:09.049470+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70033408 unmapped: 1212416 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:10.049593+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1204224 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:11.049710+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1187840 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:12.049838+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:13.050008+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1179648 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:14.050203+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1171456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:15.050374+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1171456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:16.050521+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70074368 unmapped: 1171456 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:17.050680+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:18.050886+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:19.051076+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70082560 unmapped: 1163264 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:20.051295+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1155072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:21.051453+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1155072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:22.051657+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1155072 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:23.051796+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1146880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:24.051959+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1146880 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:25.053104+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:26.053287+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:27.053435+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1138688 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:28.053565+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:29.053740+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:30.053857+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70115328 unmapped: 1130496 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:31.054002+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:32.054149+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:33.054290+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1114112 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:34.054578+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:35.054788+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:36.054910+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:37.055080+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:38.055307+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1105920 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:39.055483+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1097728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:40.055659+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1097728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:41.055816+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1097728 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:42.056201+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:43.056348+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1089536 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:44.056458+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:45.056659+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:46.056828+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1081344 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:47.056971+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1073152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:48.057101+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1073152 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:49.057240+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:50.057325+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:51.057483+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70180864 unmapped: 1064960 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:52.057883+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1056768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:53.058028+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1056768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:54.058187+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1056768 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:55.058345+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:56.058475+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1048576 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:57.058597+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:58.058777+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:17:59.058920+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1040384 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:00.059086+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:01.059232+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:02.059495+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1032192 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:03.059660+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:04.059802+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:05.059928+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70221824 unmapped: 1024000 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:06.060109+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:07.060255+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1015808 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:08.060409+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:09.060558+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1007616 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:10.060696+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:11.060864+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:12.061035+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70254592 unmapped: 991232 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:13.061172+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:14.061335+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:15.061445+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 983040 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:16.061570+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:17.061755+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:18.061897+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:19.061996+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:20.062100+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:21.062189+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:22.062322+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:23.062431+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:24.062597+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:25.062735+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:26.062867+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:27.062985+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:28.063146+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:29.063335+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:30.063472+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:31.063580+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:32.063699+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:33.063828+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:34.063981+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 925696 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:35.064156+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:36.064320+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 917504 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:37.064471+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:38.064634+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:39.064849+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70336512 unmapped: 909312 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:40.064999+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:41.065178+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 901120 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:42.065449+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 892928 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:43.065557+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 892928 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:44.065679+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:45.065972+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 884736 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:46.066192+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 868352 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:47.066323+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 868352 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:48.066461+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:49.066727+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:50.066923+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 860160 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:51.067063+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 851968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:52.067226+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 851968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:53.067405+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 851968 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:54.067603+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:55.067752+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70402048 unmapped: 843776 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:56.067930+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 835584 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:57.068160+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70410240 unmapped: 835584 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:58.068330+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 827392 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:18:59.068474+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 827392 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:00.068650+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 827392 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:01.068857+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 819200 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:02.069099+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 811008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:03.069245+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 811008 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:04.069390+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:05.069651+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:06.069808+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:07.069951+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 802816 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5568 writes, 24K keys, 5568 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5568 writes, 870 syncs, 6.40 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5568 writes, 24K keys, 5568 commit groups, 1.0 writes per commit group, ingest: 18.63 MB, 0.03 MB/s
                                           Interval WAL: 5568 writes, 870 syncs, 6.40 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
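The block ending here is RocksDB's periodic statistics dump (logged from db_impl.cc each stats period; the "Uptime(secs): 600.1 total, 600.0 interval" lines match the usual 600-second default). It shows an essentially idle store: every compaction counter is zero and the only moving numbers are in "** DB Stats **". A short sketch of the arithmetic behind those lines, using only the figures printed above (the derived rates are recomputed here, not separately measured):

    # Figures copied from the "** DB Stats **" section of the dump above.
    writes = 5568        # WAL writes in the 600 s interval
    syncs = 870          # WAL syncs in the same interval
    ingest_mb = 18.63    # interval ingest, as reported

    print(f"writes per sync: {writes / syncs:.2f}")        # 6.40, as the dump reports
    print(f"commit rate:     {writes / 600:.1f}/s")        # ~9.3 commit groups per second
    print(f"ingest rate:     {ingest_mb / 600:.3f} MB/s")  # ~0.031, rounds to the 0.03 shown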
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:08.070199+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:09.070341+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:10.070462+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 729088 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:11.070629+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 720896 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:12.070837+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 712704 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:13.070952+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 704512 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:14.071069+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70541312 unmapped: 704512 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:15.071219+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 696320 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:16.071353+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 696320 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:17.071490+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 688128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:18.071596+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 688128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:19.071739+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 688128 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:20.071849+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 679936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:21.071963+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 679936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:22.072154+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 679936 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:23.072329+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 671744 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:24.072454+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 671744 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:25.072624+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70574080 unmapped: 671744 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:26.072737+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 663552 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:27.072864+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70582272 unmapped: 663552 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:28.072975+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:29.073085+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:30.073202+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 655360 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:31.073303+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 647168 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:32.073438+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 647168 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:33.073551+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 647168 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:34.073676+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:35.073819+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:36.073936+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 638976 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:37.074090+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 630784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:38.074218+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 630784 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:39.074415+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:40.074604+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:41.074760+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 622592 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:42.074976+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:43.075299+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 614400 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:44.076018+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 606208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:45.076455+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 606208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:46.076781+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 606208 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:47.076957+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:48.077453+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 598016 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:49.077642+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:50.077811+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 589824 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:51.077945+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 581632 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:52.078102+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:53.078302+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:54.078486+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 573440 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:55.078645+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 557056 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:56.078819+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 557056 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:57.078985+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 557056 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:58.079174+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:19:59.079382+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 548864 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:00.079508+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:01.079693+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:02.079960+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:03.080118+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 540672 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:04.080245+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:05.080409+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:06.080580+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:07.080771+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 524288 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:08.080983+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 516096 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:09.081118+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 516096 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:10.081233+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 327.332214355s of 327.367889404s, submitted: 10
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 70713344 unmapped: 532480 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:11.081303+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 1007616 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:12.081470+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:13.081667+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:14.081837+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:15.081958+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 991232 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:16.082078+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:17.082315+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 983040 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:18.082435+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 974848 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:19.082601+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 974848 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:20.082738+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 974848 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:21.082887+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 966656 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:22.083038+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 966656 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:23.083179+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:24.083318+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 958464 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:25.083451+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 950272 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:26.083602+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 950272 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:27.083722+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 950272 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:28.083839+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:29.083959+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:30.084098+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 942080 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:31.084307+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:32.084513+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 933888 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:33.084778+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:34.084922+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:35.085087+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:36.085299+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:37.085418+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:38.085585+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:39.085770+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:40.085970+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:41.086147+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:42.086354+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:43.086497+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:44.086669+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:45.086796+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:46.086962+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:47.087146+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:48.087289+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:49.087435+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:50.087645+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:51.087871+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:52.088129+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:53.088522+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 851968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:54.088679+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 851968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:55.088792+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 843776 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:56.088903+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:57.089039+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:58.089202+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:59.089385+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:00.089569+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:01.089737+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 819200 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:02.089970+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 819200 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:03.090144+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 819200 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:04.090346+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:05.090483+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 802816 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:06.090619+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 802816 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:07.090746+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 802816 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:08.090905+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 794624 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:09.091092+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 794624 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:10.091249+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 794624 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:11.091418+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:12.091558+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:13.091746+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:14.091889+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:15.092060+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:16.092204+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:17.092342+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:18.092473+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:19.092646+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:20.092772+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:21.092900+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:22.093070+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:23.093201+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:24.093326+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:25.093444+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:26.093586+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:27.093769+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:28.093922+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:29.094161+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:30.094313+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:31.094449+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:32.094627+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:33.094764+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:34.094874+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:35.094985+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 753664 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:36.095337+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 753664 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:37.095457+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 753664 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:38.095606+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 753664 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:39.095741+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 753664 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:40.095896+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:41.096098+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:42.096333+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:43.096458+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:44.096584+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:45.096693+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:46.096811+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:47.096964+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:48.097096+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:49.097229+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:50.097315+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:51.097428+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 737280 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:52.097566+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 737280 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:53.097758+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 737280 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:54.097927+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 737280 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:55.098100+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:56.098355+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:57.098493+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:58.098689+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:59.098797+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:00.098939+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:01.099083+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:02.099416+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:03.099544+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:04.099716+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:05.099825+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:06.100030+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:07.100144+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:08.100279+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:09.100427+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:10.100540+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:11.100683+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:12.100900+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:13.101042+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:14.101233+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:15.102783+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:16.102907+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:17.103292+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:18.103557+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:19.103692+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:20.103870+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:21.103993+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:22.104138+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:23.104362+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:24.104691+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:25.104851+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:26.105038+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:27.105186+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:28.106179+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:29.106919+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:30.107482+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:31.107833+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:32.108014+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:33.108192+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:34.108509+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:35.108623+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:36.108835+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:37.108953+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:38.109068+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:39.109203+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:40.109342+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:41.109557+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:42.109733+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:43.109872+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:44.109989+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:45.110247+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:46.110448+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:47.110661+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:48.110874+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:49.111086+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 655360 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:50.111236+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 655360 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:51.111330+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 655360 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:52.111478+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 655360 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:53.111640+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 655360 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:54.111883+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 655360 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:55.112049+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:56.112171+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:57.112305+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:58.112491+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:59.113970+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:00.114193+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:01.114386+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:02.114576+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:03.114686+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:04.114836+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:05.114952+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:06.115071+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:07.115208+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:08.115410+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:09.115589+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:10.115763+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:11.116002+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:12.116229+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:13.116402+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:14.116519+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:15.116667+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:16.116823+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:17.116974+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:18.117137+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:19.117330+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:20.117470+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:21.117615+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:22.117807+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:23.117964+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:24.118120+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:25.118254+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:26.118397+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:27.118615+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:28.118779+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:29.118953+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:30.119077+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:31.119206+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:32.119385+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:33.119525+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:34.119807+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:35.120010+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:36.120196+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:37.120348+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:38.120571+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:39.120784+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:40.120983+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:41.121207+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:42.121532+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:43.121777+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:44.122065+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:45.123301+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:46.123448+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:47.123603+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:48.123772+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:49.123905+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:50.124025+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:51.124157+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:52.124322+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:53.124447+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:54.124618+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:55.124764+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:56.124946+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:57.125078+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:58.125232+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:59.125367+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:00.125608+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:01.125716+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:02.125895+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:03.126016+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:04.126155+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:05.126381+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:06.126633+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:07.127055+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:08.127383+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:09.127652+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:10.127876+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:11.128046+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:12.128256+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 647168 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:13.128485+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 647168 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:14.128598+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 647168 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc ms_handle_reset ms_handle_reset con 0x55c4e689dc00
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: get_auth_request con 0x55c4e7e6b400 auth_method 0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_configure stats_period=5
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:15.128712+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:16.128858+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:17.128989+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:18.129112+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:19.129227+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 ms_handle_reset con 0x55c4e74b6400 session 0x55c4e6831680
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e72bec00
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:20.129359+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:21.129573+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:22.129788+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:23.129981+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:24.130209+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:25.130397+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:26.130532+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:27.130697+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:28.130858+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:29.131000+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:30.131216+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:31.131446+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:32.131722+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:33.131963+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:34.132148+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:35.132355+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:36.132540+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:37.132740+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:38.132968+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:39.133143+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:40.133370+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:41.133672+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:42.133825+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:43.133999+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:44.134182+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:45.134297+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:46.134391+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:47.134494+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:48.134689+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:49.134919+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:50.135094+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:51.135249+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:52.135477+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:53.135600+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:54.135776+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:55.135898+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:56.136176+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:57.136337+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:58.136536+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:59.136707+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:00.136866+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:01.137017+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:02.137175+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:03.137334+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:04.137515+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:05.137664+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:06.137899+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:07.138066+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:08.138202+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:09.138322+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:10.138510+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:11.138649+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:12.138808+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:13.138993+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:14.139215+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:15.139310+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:16.139434+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:17.139558+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:18.139672+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:19.139813+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:20.139964+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:21.140117+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:22.140307+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:23.140475+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:24.140626+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:25.140756+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:26.140960+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:27.141134+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:28.141281+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:29.141459+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:30.141602+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:31.141733+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:32.142344+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:33.142479+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:34.142668+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:35.142818+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:36.142964+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:37.143130+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:38.143275+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:39.143452+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:40.143633+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:41.143787+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:42.144031+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:43.144211+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:44.144349+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:45.144481+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:46.144656+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:47.144812+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:48.146048+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:49.147339+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:50.147466+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:51.147598+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:52.147793+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:53.147927+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:54.148095+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:55.148244+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:56.148442+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:57.148595+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:58.148677+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:59.148837+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:00.149009+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:01.149178+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:02.149353+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:03.149461+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:04.149618+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:05.149770+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:06.150182+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:07.150341+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:08.150480+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:09.150592+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:10.150705+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:11.150816+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:12.150961+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:13.151144+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:14.151291+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:15.151435+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:16.151547+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:17.151738+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:18.152071+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:19.152212+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:20.152365+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:21.152498+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:22.152640+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:23.152752+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:24.152891+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:25.153019+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:26.153140+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:27.522051+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:28.522378+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:29.522548+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:30.522699+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:31.522848+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:32.523020+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:33.523152+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:34.523321+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:35.523445+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:36.523587+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:37.523711+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:38.523854+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:39.524004+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:40.524142+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:41.524304+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:42.524533+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:43.524701+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:44.524890+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:45.525032+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:46.525189+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:47.525336+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:48.525506+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:49.525647+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:50.525826+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:51.526015+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:52.526231+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:53.526346+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:54.526535+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:55.526663+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:56.526803+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:57.526939+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:58.527104+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:59.527255+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:00.527412+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:01.527553+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:02.527912+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:03.528079+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:04.528259+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:05.528467+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:06.528689+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:07.528841+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:08.529033+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:09.529193+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:10.529344+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:11.529508+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:12.529695+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:13.529838+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:14.529964+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:15.530107+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:16.530282+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:17.530495+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:18.530752+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:19.531002+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:20.531905+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:21.532665+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:22.532891+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:23.533051+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:24.533235+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:25.533417+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:26.533541+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:27.533663+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:28.533814+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:29.533956+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:30.534077+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:31.534207+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:32.534367+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:33.534474+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:34.534600+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:35.534747+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:36.534962+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:37.535099+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:38.535328+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:39.535472+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:40.535686+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:41.535822+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:42.535994+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:43.536164+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:44.536351+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:45.536488+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:46.536704+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:47.536829+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:48.537006+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:49.537192+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:50.537364+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:51.537545+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:52.538343+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:53.538490+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:54.538743+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:55.538885+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:56.539020+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:57.539215+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:58.539344+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:59.539543+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:00.539748+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:01.539924+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:02.540204+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:03.540362+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:04.540577+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:05.540698+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:06.540884+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:07.541014+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:08.541158+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:09.541349+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:10.541526+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 172032 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:11.541700+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:12.541927+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:13.542105+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:14.542300+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:15.542475+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:16.542673+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:17.542811+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:18.542979+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:19.543176+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:20.543352+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:21.543472+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:22.543646+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:23.543790+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:24.880044+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:25.881249+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 172032 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:26.882032+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 172032 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:27.882308+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 172032 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:28.882557+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 172032 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:29.882780+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:30.882930+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:31.883099+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:32.883332+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:33.883492+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:34.883695+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:35.883890+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:36.884168+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:37.884395+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:38.884584+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:39.884792+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:40.884981+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:41.885159+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:42.885382+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:43.885532+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:44.885665+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:45.885799+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:46.886005+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:47.886191+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:48.886349+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:49.886500+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:50.886674+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:51.886844+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:52.887090+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:53.887336+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:54.887540+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:55.888084+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:56.888236+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:57.888433+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:58.888574+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:59.888696+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:00.888833+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:01.888975+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:02.889134+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:03.889361+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:04.889560+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:05.889698+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:06.889858+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:07.890005+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5780 writes, 24K keys, 5780 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5780 writes, 976 syncs, 5.92 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 98304 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:08.890163+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 98304 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:09.890314+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 81920 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:10.890446+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 81920 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:11.890608+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 65536 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:12.890786+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 65536 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:13.890950+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 65536 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:14.891113+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:15.891315+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:16.891472+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:17.891668+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:18.891839+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:19.891966+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:20.892092+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:21.892222+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:22.892397+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:23.892523+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:24.892689+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:25.892844+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:26.892976+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:27.893116+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:28.893221+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:29.893338+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:30.893576+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:31.893702+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:32.893886+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:33.894015+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:34.894412+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:35.894626+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:36.894799+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:37.895204+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:38.895410+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:39.895582+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:40.895754+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:41.895931+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:42.896115+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:43.896346+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:44.896533+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:45.896721+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:46.896914+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:47.897058+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:48.897233+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:49.897399+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:50.897570+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:51.897778+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:52.898012+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:53.898175+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:54.898358+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:55.898531+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:56.898723+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:57.898869+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:58.899078+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:59.899375+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:00.899562+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:01.899757+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:02.900007+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:03.900176+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:04.900358+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:05.900514+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:06.900674+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:07.900876+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:08.901050+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:09.901215+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.866027832s of 600.168090820s, submitted: 106
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:10.901421+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 139264 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:11.901541+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:12.901758+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:13.901939+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:14.902092+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:15.902337+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:16.902526+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:17.902697+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:18.902842+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:19.902987+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:20.903140+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:21.903339+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:22.903573+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:23.903806+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:24.903978+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:25.904127+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:26.904315+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:27.904495+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:28.904746+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:29.904967+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:30.905156+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:31.905319+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:32.905487+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:33.905665+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:34.905857+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:35.905995+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:36.906156+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:37.906355+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:38.906518+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:39.906665+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:40.906830+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:41.906961+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:42.907137+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:43.907341+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:44.907475+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:45.907606+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:46.907790+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:47.907964+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:48.908146+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:49.908357+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:50.908521+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:51.908732+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:52.908910+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:53.909089+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:54.909255+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:55.909445+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:56.909575+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:57.909728+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:58.909856+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:59.910016+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:00.910175+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:01.910308+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:02.910460+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:03.910580+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:04.910749+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:05.910882+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:06.911012+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:07.911210+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:08.911340+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:09.911470+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:10.911618+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:11.911759+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:12.911973+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:13.912176+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:14.912389+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:15.912538+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:16.912734+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:17.912970+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:18.913104+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:19.913356+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:20.913504+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:21.913674+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:22.913866+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:23.914032+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:24.914197+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:25.914361+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:26.914515+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:27.914692+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:28.914967+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:29.915159+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:30.915377+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:31.915594+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:32.915805+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:33.915953+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:34.916160+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:35.916418+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:36.916595+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:37.916790+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:38.916977+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:39.917177+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:40.917335+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:41.917534+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:42.917685+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:43.917911+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:44.918124+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:45.918329+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:46.918468+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:47.918620+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:48.918828+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:49.918951+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:50.919071+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:51.919232+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:52.919505+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:53.919695+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:54.919830+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:55.919987+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:56.920134+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:57.920384+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:58.920579+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:59.920739+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:00.920950+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:01.921314+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:02.921674+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:03.921918+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:04.922122+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:05.922398+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:06.922690+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:07.922999+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:08.923230+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:09.923468+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:10.923640+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:11.923828+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:12.924103+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:13.924332+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:14.924552+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:15.924823+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:16.925326+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:17.925495+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:18.925684+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:19.926030+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:20.926251+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:21.926427+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:22.926674+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:23.926899+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:24.927245+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:25.927396+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:26.927913+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:27.928144+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:28.928412+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:29.928639+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:30.928801+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:31.928968+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:32.929230+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:33.929456+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:34.929683+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:35.929913+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:36.930039+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:37.930346+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:38.930573+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:39.930759+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:40.930939+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:41.931148+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:42.931390+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:43.931593+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:44.931791+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:45.931963+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:46.932148+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:47.932311+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:48.932491+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:49.932624+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:50.932754+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:51.932919+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:52.933118+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:53.933329+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:54.933473+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:55.933659+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:56.933891+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:57.934076+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:58.934205+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:59.934326+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:00.934485+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:01.934646+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:02.934836+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:03.934970+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:04.935156+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:05.935330+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:06.935515+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:07.935686+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:08.935891+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:09.936076+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:10.936313+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:11.936474+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:12.936693+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:13.936915+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:14.937303+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:15.937480+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:16.937672+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:17.937822+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:18.937983+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:19.938181+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:20.938388+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:21.938623+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:22.938854+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:23.938989+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:24.939142+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:25.939342+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:26.939477+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:27.939624+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:28.939793+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:29.939991+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:30.940133+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 120 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 201.120803833s of 201.376556396s, submitted: 106
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:31.940344+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e74b6400
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc2ac000/0x0/0x4ffc00000, data 0x8bafb4/0x972000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:32.940491+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 123 ms_handle_reset con 0x55c4e74b6400 session 0x55c4e957cb40
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938465 data_alloc: 218103808 data_used: 208896
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 17620992 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:33.940631+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e72be400
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 17588224 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc2a7000/0x0/0x4ffc00000, data 0x8bcb70/0x976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:34.940788+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 124 ms_handle_reset con 0x55c4e72be400 session 0x55c4e9cd3e00
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:35.940890+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:36.941079+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:37.941315+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975200 data_alloc: 218103808 data_used: 212992
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fbe33000/0x0/0x4ffc00000, data 0xd2e709/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:38.941490+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:39.941641+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:40.941770+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fbe33000/0x0/0x4ffc00000, data 0xd2e709/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 124 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:41.941975+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe33000/0x0/0x4ffc00000, data 0xd2e709/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:42.942159+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977166 data_alloc: 218103808 data_used: 212992
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:43.942335+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:44.942479+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:45.942620+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:46.942782+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:47.942917+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977166 data_alloc: 218103808 data_used: 212992
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:48.943082+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:49.943261+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:50.943448+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:51.943560+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:52.943722+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977166 data_alloc: 218103808 data_used: 212992
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:53.943841+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:54.944053+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 10
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:55.944507+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:56.944747+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:57.945007+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977326 data_alloc: 218103808 data_used: 217088
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:58.945192+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:59.945384+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:00.945560+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:01.945739+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:02.945932+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 11
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977326 data_alloc: 218103808 data_used: 217088
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.246669769s of 31.416391373s, submitted: 47
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:03.946054+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:04.946181+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:05.947622+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:06.948241+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:07.948390+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976654 data_alloc: 218103808 data_used: 217088
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:08.948530+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:09.948735+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:10.949115+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:11.949322+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:12.949504+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976670 data_alloc: 218103808 data_used: 217088
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:13.949633+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:14.949970+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:15.950231+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.018905640s of 13.032593727s, submitted: 5
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:16.950532+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:17.950800+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976638 data_alloc: 218103808 data_used: 217088
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:18.951031+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:19.951288+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:20.951418+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 17285120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:21.951673+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 17285120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14651 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:22.952021+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976622 data_alloc: 218103808 data_used: 217088
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 17285120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:23.952155+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 17285120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:24.952336+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 17285120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:25.952452+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e7211400
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:26.952671+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:27.952864+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.039279938s of 12.053936005s, submitted: 5
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978390 data_alloc: 218103808 data_used: 217088
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:28.952998+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:29.953113+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:30.953344+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:31.953508+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd30207/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:32.953723+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978390 data_alloc: 218103808 data_used: 217088
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:33.953908+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:34.954027+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0xd31ded/0xdf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:35.954158+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0xd31ded/0xdf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:36.954303+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0xd31ded/0xdf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:37.954454+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982212 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:38.954653+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0xd31ded/0xdf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:39.954840+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:40.955007+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:41.955164+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:42.955362+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982212 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:43.955491+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0xd31ded/0xdf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:44.955667+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.812093735s of 16.887153625s, submitted: 21
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:45.955775+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:46.955891+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 12
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:47.955997+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984498 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:48.956141+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:49.956278+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fbe2b000/0x0/0x4ffc00000, data 0xd33850/0xdf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:50.956377+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:51.956512+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:52.956842+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984530 data_alloc: 218103808 data_used: 225280
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:53.956961+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:54.957082+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.063793182s of 10.084918976s, submitted: 14
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fbe2b000/0x0/0x4ffc00000, data 0xd33850/0xdf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:55.957215+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:56.957334+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:57.957456+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986138 data_alloc: 218103808 data_used: 229376
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:58.957602+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fbe2a000/0x0/0x4ffc00000, data 0xd338eb/0xdf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:59.957760+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:00.957928+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:01.958074+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:02.958224+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987922 data_alloc: 218103808 data_used: 229376
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:03.958310+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fbe2a000/0x0/0x4ffc00000, data 0xd338eb/0xdf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 129 handle_osd_map epochs [128,128], i have 129, src has [1,128]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:04.958412+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.859433174s of 10.010948181s, submitted: 51
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fbe23000/0x0/0x4ffc00000, data 0xd370d7/0xdfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 17203200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:05.958522+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 17195008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:06.958671+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 17186816 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fbe20000/0x0/0x4ffc00000, data 0xd38d88/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:07.958799+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fbe20000/0x0/0x4ffc00000, data 0xd38d88/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003074 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:08.958948+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:09.959082+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:10.959200+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:11.959321+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:12.959487+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005020 data_alloc: 218103808 data_used: 245760
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fbe1b000/0x0/0x4ffc00000, data 0xd3c509/0xe03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:13.959624+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:14.959785+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:15.959919+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 132 handle_osd_map epochs [133,134], i have 132, src has [1,134]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.698640823s of 10.893690109s, submitted: 71
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:16.960050+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fbe15000/0x0/0x4ffc00000, data 0xd3fae7/0xe08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:17.960216+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011540 data_alloc: 218103808 data_used: 253952
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:18.960313+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:19.960437+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:20.960577+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:21.960740+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fbe15000/0x0/0x4ffc00000, data 0xd3fae7/0xe08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:22.960934+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011556 data_alloc: 218103808 data_used: 253952
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:23.961098+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:24.961296+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:25.961373+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 16171008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:26.961512+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd4156a/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 16171008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:27.961670+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 16171008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013810 data_alloc: 218103808 data_used: 253952
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:28.961838+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 16171008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.177145004s of 13.295339584s, submitted: 46
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:29.962007+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 16146432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:30.962218+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0xd416a0/0xe0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 16138240 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:31.962373+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 16138240 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:32.962554+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1018872 data_alloc: 218103808 data_used: 262144
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:33.962732+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:34.962858+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fbe0e000/0x0/0x4ffc00000, data 0xd431eb/0xe0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:35.963010+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd43150/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:36.963149+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:37.963334+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017302 data_alloc: 218103808 data_used: 262144
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:38.963493+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd43150/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:39.963630+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd43150/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.755588531s of 10.842039108s, submitted: 28
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:40.963787+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:41.963915+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:42.964184+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021300 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:43.964364+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:44.964489+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44bb3/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:45.964657+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44bb3/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:46.964846+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44bb3/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:47.965037+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021476 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:48.965210+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:49.965380+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44bb3/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:50.965543+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:51.965733+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:52.965936+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021476 data_alloc: 218103808 data_used: 270336
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.351358414s of 13.363707542s, submitted: 12
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:53.966054+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:54.966204+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:55.966360+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:56.966517+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:57.966649+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022524 data_alloc: 218103808 data_used: 274432
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:58.966798+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:59.966975+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:00.967131+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:01.967849+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:02.968030+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021658 data_alloc: 218103808 data_used: 274432
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:03.968333+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:04.968556+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0d000/0x0/0x4ffc00000, data 0xd44bb3/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.000518799s of 12.013453484s, submitted: 4
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:05.968683+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:06.968844+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:07.969001+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023426 data_alloc: 218103808 data_used: 274432
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:08.969164+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:09.969364+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:10.969539+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:11.969698+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:12.969869+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 13
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023074 data_alloc: 218103808 data_used: 274432
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:13.970039+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:14.970181+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:15.970322+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:16.970472+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.645391464s of 11.660141945s, submitted: 135
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:17.970614+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 15646720 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028518 data_alloc: 218103808 data_used: 282624
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:18.970859+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 15646720 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd46834/0xe15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:19.971029+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:20.971181+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:21.971360+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:22.971532+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028828 data_alloc: 218103808 data_used: 282624
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:23.971706+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:24.971846+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd4839f/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:25.971960+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 14581760 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:26.972113+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 14581760 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.854335785s of 10.010847092s, submitted: 61
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:27.972258+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031432 data_alloc: 218103808 data_used: 290816
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:28.972503+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:29.972638+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:30.972792+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe05000/0x0/0x4ffc00000, data 0xd49d67/0xe19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:31.972955+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:32.973203+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033200 data_alloc: 218103808 data_used: 290816
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:33.973394+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe04000/0x0/0x4ffc00000, data 0xd49e02/0xe1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:34.973600+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:35.973811+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe04000/0x0/0x4ffc00000, data 0xd49e02/0xe1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:36.974009+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:37.974199+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033200 data_alloc: 218103808 data_used: 290816
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:38.974370+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:39.974549+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:40.974673+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.005295753s of 14.014258385s, submitted: 3
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe04000/0x0/0x4ffc00000, data 0xd49e02/0xe1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:41.974828+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:42.975004+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031432 data_alloc: 218103808 data_used: 290816
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:43.975184+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:44.975388+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:45.975572+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:46.975703+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe05000/0x0/0x4ffc00000, data 0xd49d67/0xe19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:47.975886+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031432 data_alloc: 218103808 data_used: 290816
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:48.976026+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:49.976210+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe05000/0x0/0x4ffc00000, data 0xd49d67/0xe19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:50.976371+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:51.976526+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:52.976682+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031448 data_alloc: 218103808 data_used: 290816
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:53.976755+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:54.977105+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:55.977258+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe05000/0x0/0x4ffc00000, data 0xd49d67/0xe19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.337747574s of 15.514533043s, submitted: 3
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:56.977374+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 14548992 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:57.977519+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 14540800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033200 data_alloc: 218103808 data_used: 290816
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:58.977634+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 14540800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe04000/0x0/0x4ffc00000, data 0xd49e02/0xe1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:59.977755+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:00.977916+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:01.978045+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:02.978198+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034616 data_alloc: 218103808 data_used: 290816
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:03.978312+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:04.978489+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 140 handle_osd_map epochs [141,142], i have 140, src has [1,142]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fbe03000/0x0/0x4ffc00000, data 0xd49ec8/0xe1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:05.978604+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fbdfc000/0x0/0x4ffc00000, data 0xd4d6b4/0xe21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:06.978721+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.194281578s of 10.365506172s, submitted: 53
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 14491648 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:07.993760+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 14491648 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041224 data_alloc: 218103808 data_used: 299008
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:08.993955+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 14491648 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4d61c/0xe20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:09.994257+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4d61c/0xe20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 14483456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:10.994480+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 14475264 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:11.994721+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:12.995458+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045606 data_alloc: 218103808 data_used: 311296
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:13.995628+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd4f099/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:14.995824+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:15.995982+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:16.996108+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:17.996290+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:18.996468+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043860 data_alloc: 218103808 data_used: 311296
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbdfc000/0x0/0x4ffc00000, data 0xd4efd2/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:19.996662+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.908679962s of 12.944479942s, submitted: 20
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:20.996795+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:21.996945+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:22.997094+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:23.997259+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048034 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:24.997417+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:25.997598+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:26.997744+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:27.997858+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:28.997975+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048034 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:29.998115+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.214165688s of 10.225300789s, submitted: 15
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:30.998346+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:31.998529+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:32.998694+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:33.998820+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048050 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:34.999051+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:35.999188+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:36.999334+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:37.999478+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:38.999664+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047170 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:39.999827+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:40.999965+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50afe/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:42.000118+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50afe/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:43.000304+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.929623604s of 12.947863579s, submitted: 6
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:44.000510+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050818 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:45.000672+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf5000/0x0/0x4ffc00000, data 0xd50bc4/0xe27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:46.000845+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:47.000952+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50afb/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:48.001119+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:49.001310+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50afb/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049824 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:50.001477+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:51.001590+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:52.001737+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 14409728 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:53.002296+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 14409728 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:54.002471+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048072 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 14409728 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:55.002620+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 14409728 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:56.002772+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 14409728 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:57.002934+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.179697037s of 14.221464157s, submitted: 13
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 14401536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:58.003119+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:59.003332+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051416 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:00.003471+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:01.003600+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50b6b/0xe27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:02.003768+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50b6b/0xe27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:03.003951+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:04.004088+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051432 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:05.004218+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 14368768 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50ad0/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:06.004448+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 14368768 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:07.004568+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.019722939s of 10.055611610s, submitted: 11
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 14344192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:08.004744+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 14344192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:09.004888+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054342 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 14344192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf4000/0x0/0x4ffc00000, data 0xd50bc7/0xe28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:10.005074+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 14344192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:11.005245+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:12.005381+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:13.005525+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:14.005759+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050778 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:15.005888+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:16.006010+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:17.006187+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:18.006364+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:19.006566+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050778 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.783651352s of 11.822710991s, submitted: 13
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:20.006733+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:21.006879+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:22.007060+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:23.007237+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:24.007315+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050794 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:25.007428+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:26.007621+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:27.007809+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:28.007954+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:29.008105+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050794 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.000278473s of 10.004323006s, submitted: 1
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:30.008241+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:31.008416+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:32.008600+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:33.008743+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:34.008895+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050778 data_alloc: 218103808 data_used: 319488
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:35.009012+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:36.009175+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:37.009306+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:38.009411+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:39.009624+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fbdf3000/0x0/0x4ffc00000, data 0xd526e4/0xe29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1056832 data_alloc: 218103808 data_used: 327680
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.260903358s of 10.339648247s, submitted: 30
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:40.009787+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:41.009913+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:42.010060+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:43.010243+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:44.010416+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057368 data_alloc: 218103808 data_used: 327680
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fbdf3000/0x0/0x4ffc00000, data 0xd5277e/0xe2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:45.010543+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:46.010688+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 13221888 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:47.010802+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0xd541e1/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 13197312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:48.010961+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 13197312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:49.011051+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063182 data_alloc: 218103808 data_used: 335872
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 13189120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:50.011185+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 13189120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.023447037s of 11.083169937s, submitted: 24
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:51.011333+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fbdf2000/0x0/0x4ffc00000, data 0xd54119/0xe2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 13164544 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:52.011516+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 13164544 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:53.011787+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0xd541e2/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 13164544 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:54.011933+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065822 data_alloc: 218103808 data_used: 344064
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 13131776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:55.012053+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 13131776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:56.012190+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbded000/0x0/0x4ffc00000, data 0xd55dc6/0xe30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 13131776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:57.012321+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 13123584 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:58.012425+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 13123584 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbdee000/0x0/0x4ffc00000, data 0xd55cff/0xe2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:59.012533+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066020 data_alloc: 218103808 data_used: 344064
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 13123584 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:00.012660+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 13115392 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:01.012849+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 13107200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:02.012956+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fbde9000/0x0/0x4ffc00000, data 0xd57804/0xe33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 13107200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:03.013107+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 13107200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:04.013254+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.151597977s of 13.277581215s, submitted: 46
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071034 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fbdea000/0x0/0x4ffc00000, data 0xd577be/0xe33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 13074432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:05.013452+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 13074432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:06.013611+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 13074432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:07.013759+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 13074432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7231 writes, 27K keys, 7231 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7231 writes, 1573 syncs, 4.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1451 writes, 3407 keys, 1451 commit groups, 1.0 writes per commit group, ingest: 1.89 MB, 0.00 MB/s
                                           Interval WAL: 1451 writes, 597 syncs, 2.43 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:08.013920+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 13074432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:09.014037+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e9cefc00
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071518 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 13058048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:10.014173+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fbde9000/0x0/0x4ffc00000, data 0xd577dc/0xe32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 14
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 13058048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:11.014246+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 13049856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:12.014426+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 13049856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:13.014621+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fbdec000/0x0/0x4ffc00000, data 0xd577dc/0xe32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 13033472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:14.014784+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073400 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.193478584s of 10.242251396s, submitted: 14
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc ms_handle_reset ms_handle_reset con 0x55c4e7e6b400
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: get_auth_request con 0x55c4e9e6dc00 auth_method 0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_configure stats_period=5
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 12394496 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:15.015037+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 9445376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:16.015210+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 9314304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:17.015433+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 9109504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:18.015618+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fabeb000/0x0/0x4ffc00000, data 0xdb7c1e/0xe93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 7462912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:19.015752+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092844 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 ms_handle_reset con 0x55c4e72bec00 session 0x55c4e957c1e0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e7ee1000
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 6963200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:20.015920+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 6963200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:21.016054+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faba8000/0x0/0x4ffc00000, data 0xdf8df7/0xed5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 7184384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:22.016212+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 5799936 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:23.016427+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 5799936 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:24.016584+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090844 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 5767168 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.244800568s of 10.530930519s, submitted: 82
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:25.016695+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fab44000/0x0/0x4ffc00000, data 0xe5c987/0xf3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 5873664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:26.016828+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faaff000/0x0/0x4ffc00000, data 0xea1618/0xf7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 5554176 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:27.016984+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 5316608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:28.017109+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 4898816 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:29.017438+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101184 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87359488 unmapped: 3817472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:30.017621+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87359488 unmapped: 3817472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:31.017746+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87695360 unmapped: 3481600 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:32.051081+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faa6d000/0x0/0x4ffc00000, data 0xf36459/0x1011000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 3391488 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:33.051516+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 3391488 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:34.051648+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114800 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 2965504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:35.051852+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.375099182s of 10.666891098s, submitted: 91
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa5e4000/0x0/0x4ffc00000, data 0xfaf493/0x108a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 3645440 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:36.052013+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 2424832 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:37.052185+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2457600 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:38.052345+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2457600 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:39.052500+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa585000/0x0/0x4ffc00000, data 0x100cbce/0x10e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111584 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa585000/0x0/0x4ffc00000, data 0x100cbce/0x10e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 2424832 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:40.052627+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 2170880 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:41.052734+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 2170880 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:42.052830+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa532000/0x0/0x4ffc00000, data 0x1060023/0x113c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:43.052964+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 2162688 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:44.053080+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125482 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:45.053202+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa4e3000/0x0/0x4ffc00000, data 0x10affc2/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:46.053316+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:47.053437+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89423872 unmapped: 1753088 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.628952026s of 11.860681534s, submitted: 77
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:48.053565+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 729088 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:49.053666+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 729088 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130934 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:50.053840+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 352256 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa489000/0x0/0x4ffc00000, data 0x110a37c/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:51.053970+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 352256 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:52.054130+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 352256 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:53.054324+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90685440 unmapped: 1540096 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa469000/0x0/0x4ffc00000, data 0x112984a/0x1205000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:54.054523+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90759168 unmapped: 1466368 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131374 data_alloc: 218103808 data_used: 356352
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x115775e/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:55.054655+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90759168 unmapped: 1466368 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:56.055373+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90382336 unmapped: 1843200 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:57.055537+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90382336 unmapped: 1843200 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:58.055668+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90382336 unmapped: 1843200 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.796188354s of 10.946480751s, submitted: 45
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:59.055777+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90390528 unmapped: 1835008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128236 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa43b000/0x0/0x4ffc00000, data 0x11577fd/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:00.055903+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90390528 unmapped: 1835008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:01.056046+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:02.056195+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa43b000/0x0/0x4ffc00000, data 0x11577fd/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:03.056360+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:04.056492+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127370 data_alloc: 218103808 data_used: 352256
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:05.056644+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa438000/0x0/0x4ffc00000, data 0x1159348/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:06.056821+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:07.056966+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:08.057108+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.159050941s of 10.226642609s, submitted: 30
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:09.057330+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129990 data_alloc: 218103808 data_used: 360448
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:10.057460+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x11592ad/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,1])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:11.057582+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90390528 unmapped: 1835008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:12.057738+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90415104 unmapped: 1810432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 15
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:13.057932+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90275840 unmapped: 1949696 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:14.058087+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90292224 unmapped: 1933312 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137658 data_alloc: 218103808 data_used: 368640
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:15.058406+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90292224 unmapped: 1933312 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa433000/0x0/0x4ffc00000, data 0x115c991/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:16.058638+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:17.058819+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x115cab7/0x123d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:18.058951+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x115cab7/0x123d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:19.059121+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141322 data_alloc: 218103808 data_used: 368640
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:20.059361+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.526574135s of 12.174523354s, submitted: 146
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:21.059499+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa42d000/0x0/0x4ffc00000, data 0x115e51a/0x1240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:22.059641+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa42d000/0x0/0x4ffc00000, data 0x115e51a/0x1240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:23.059799+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:24.059953+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145016 data_alloc: 218103808 data_used: 376832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:25.060115+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:26.060312+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90218496 unmapped: 2007040 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:27.060438+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90218496 unmapped: 2007040 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa42d000/0x0/0x4ffc00000, data 0x115e51a/0x1240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:28.060595+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90218496 unmapped: 2007040 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:29.060715+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90218496 unmapped: 2007040 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148470 data_alloc: 218103808 data_used: 389120
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:30.060883+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90218496 unmapped: 2007040 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:31.061001+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.807652473s of 10.892947197s, submitted: 47
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:32.061131+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1161b63/0x1246000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:33.061324+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:34.061473+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152508 data_alloc: 218103808 data_used: 389120
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x1161bfe/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:35.061659+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:36.061839+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:37.062013+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:38.062145+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:39.062308+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0x11636ee/0x1249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90243072 unmapped: 1982464 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155816 data_alloc: 218103808 data_used: 397312
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:40.062487+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90243072 unmapped: 1982464 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:41.062616+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90259456 unmapped: 1966080 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 156 ms_handle_reset con 0x55c4e9cefc00 session 0x55c4ea21a3c0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:42.062719+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90480640 unmapped: 1744896 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.805146217s of 10.963012695s, submitted: 200
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:43.062922+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 16
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:44.063082+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa41f000/0x0/0x4ffc00000, data 0x1166d67/0x124f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161610 data_alloc: 218103808 data_used: 397312
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:45.063281+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:46.063452+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:47.063628+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:48.063809+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:49.064002+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 157 handle_osd_map epochs [158,159], i have 157, src has [1,159]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x1166ccc/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168756 data_alloc: 218103808 data_used: 409600
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:50.064184+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:51.064372+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa419000/0x0/0x4ffc00000, data 0x116a4e8/0x1254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:52.064562+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:53.064775+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa415000/0x0/0x4ffc00000, data 0x116bf4b/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:54.064971+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172578 data_alloc: 218103808 data_used: 409600
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:55.065096+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:56.065222+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:57.065384+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa415000/0x0/0x4ffc00000, data 0x116bf4b/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:58.065509+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.826541901s of 16.015865326s, submitted: 77
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:59.065659+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170834 data_alloc: 218103808 data_used: 417792
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:00.065776+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 17
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:01.065907+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:02.066106+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92635136 unmapped: 638976 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:03.066307+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92635136 unmapped: 638976 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116bf4b/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:04.066447+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x116db61/0x125a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178654 data_alloc: 218103808 data_used: 425984
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:05.066602+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 589824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x116f787/0x125d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:06.066786+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 573440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:07.066971+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 573440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:08.067153+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92708864 unmapped: 565248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:09.067361+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92708864 unmapped: 565248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177118 data_alloc: 218103808 data_used: 425984
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:10.067566+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92708864 unmapped: 565248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.937370300s of 12.083664894s, submitted: 69
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:11.067679+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa40e000/0x0/0x4ffc00000, data 0x11711fa/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:12.067828+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:13.067996+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa40e000/0x0/0x4ffc00000, data 0x11711fa/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:14.068158+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:15.068368+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:16.068502+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:17.068689+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:18.068825+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:19.069020+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:20.069214+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:21.069411+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:22.069610+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:23.069875+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:24.070064+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:25.070204+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:26.070343+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:27.070501+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:28.070690+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:29.070807+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:30.070963+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:31.071105+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:32.071299+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:33.071485+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:34.071631+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:35.071730+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:36.071841+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:37.071989+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:38.072188+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:39.072358+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:40.072490+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:41.072652+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:42.072790+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:43.072969+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:44.073155+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:45.073331+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:46.073492+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:47.073654+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:48.073790+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:49.074000+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:50.074412+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:51.074621+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:52.074808+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:53.075022+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:54.075217+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:55.075385+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:56.075618+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.621803284s of 45.639411926s, submitted: 14
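The _kv_sync_thread utilization line is a duty-cycle report for BlueStore's key/value sync thread: 45.6218 s idle out of a 45.6394 s window with 14 transactions submitted, i.e. about 99.96% idle and ~0.3 commits/s. Restating the printed numbers:

idle, total, submitted = 45.621803284, 45.639411926, 14
print(f"idle {idle / total:.2%}, ~{submitted / total:.2f} txns/s")
# idle 99.96%, ~0.31 txns/s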
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:57.075755+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x11711fa/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:58.075962+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:59.076150+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180768 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:00.076408+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:01.076561+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x11711fa/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:02.076736+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:03.076887+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:04.077022+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180592 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:05.077147+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x11711fa/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:06.077281+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:07.077428+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:08.077556+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:09.077722+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179016 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:10.077843+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:11.077984+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.522039413s of 15.537490845s, submitted: 6
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:12.078085+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:13.078211+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 18
Nov 29 05:45:39 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
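mgrc has received mgrmap version 18; the active mgr is advertised as an address vector carrying both a msgr2 endpoint (v2:...:6800) and a legacy msgr1 endpoint (v1:...:6801), each with the process nonce 1460327761. A small parser for that addrvec shape, inferred from this line alone:

import re

def parse_addrvec(s):
    # "[v2:ip:port/nonce,v1:ip:port/nonce]" -> list of (proto, ip, port, nonce)
    return [(proto, ip, int(port), int(nonce))
            for proto, ip, port, nonce
            in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", s)]

parse_addrvec("[v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]")
# -> [("v2", "192.168.122.100", 6800, 1460327761),
#     ("v1", "192.168.122.100", 6801, 1460327761)]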
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:14.078325+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178824 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:15.078468+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:16.078623+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:17.078772+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:18.078893+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:19.079048+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:20.079181+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:21.079309+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:22.079407+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:23.079597+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:24.079713+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179016 data_alloc: 218103808 data_used: 434176
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:25.079844+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.762817383s of 13.779428482s, submitted: 137
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 163 handle_osd_map epochs [164,164], i have 164, src has [1,164]
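The two handle_osd_map lines bracket a normal epoch hand-off: the mon offers the inclusive range [164,164] while the OSD still holds 163, and once the increment is applied the same message reports "i have 164"; the heartbeats that follow switch from "osd.0 163" to "osd.0 164" accordingly, and the _renew_subs / _send_mon_message pair in between is the OSD re-subscribing to the mon for future maps. A hypothetical tracker illustrating the bookkeeping (not Ceph code):

def apply_map_message(have, first, last):
    # Mon offers inclusive epoch range [first, last]; apply anything newer
    # than what we already hold, otherwise it is a duplicate delivery.
    return have if last <= have else max(have, last)

have = 163
have = apply_map_message(have, 164, 164)  # -> 164, matching the second line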
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:26.079977+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa40c000/0x0/0x4ffc00000, data 0x1172d45/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:27.080135+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:28.080319+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:29.080467+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183190 data_alloc: 218103808 data_used: 442368
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:30.080614+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa40c000/0x0/0x4ffc00000, data 0x1172d45/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:31.080734+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:32.080907+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa40c000/0x0/0x4ffc00000, data 0x1172d45/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:33.081075+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:34.081239+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183190 data_alloc: 218103808 data_used: 442368
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:35.081387+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:36.081523+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa40d000/0x0/0x4ffc00000, data 0x1172d45/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:37.081731+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:38.081868+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.566138268s of 13.058055878s, submitted: 22
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:39.081985+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 344064 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186468 data_alloc: 218103808 data_used: 450560
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:40.082135+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 344064 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:41.082299+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 327680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:42.082466+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa409000/0x0/0x4ffc00000, data 0x11747a8/0x1264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 327680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:43.082633+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 327680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:44.082807+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa409000/0x0/0x4ffc00000, data 0x11747a8/0x1264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189266 data_alloc: 218103808 data_used: 450560
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:45.083010+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:46.083194+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:47.083387+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:48.083535+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:49.083666+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa406000/0x0/0x4ffc00000, data 0x11763be/0x1267000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189266 data_alloc: 218103808 data_used: 450560
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:50.083869+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.003107071s of 12.074946404s, submitted: 41
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:51.084040+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:52.084179+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:53.084341+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:54.084467+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:55.084928+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192240 data_alloc: 218103808 data_used: 450560
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:56.085116+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:57.085246+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:58.085414+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:59.085585+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:00.085703+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192400 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:01.085857+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:02.086057+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:03.086251+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:04.086502+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:05.086623+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192400 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:06.086766+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:07.086868+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:08.087040+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:09.087171+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:10.087304+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192400 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:11.087417+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:12.087816+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.950380325s of 21.958806992s, submitted: 11
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:13.087988+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:14.088107+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:15.088251+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191712 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:16.088447+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:17.088607+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177edc/0x126b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:18.088756+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:19.088901+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177edc/0x126b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:20.089082+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193480 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:21.089293+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:22.089437+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:23.089578+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:24.089755+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:25.089920+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191696 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:26.090054+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:27.090213+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.996479988s of 15.018195152s, submitted: 6
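[The _kv_sync_thread line above reports the thread that commits BlueStore transactions to RocksDB: idle time per reporting window plus the number of submissions. Here it was idle 14.996 s of a 15.018 s window with 6 submissions, i.e. busy roughly 0.14% of the time. A tiny duty-cycle sketch over that line:

import re

line = ("bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: "
        "idle 14.996479988s of 15.018195152s, submitted: 6")

m = re.search(r"idle ([\d.]+)s of ([\d.]+)s, submitted: (\d+)", line)
idle, window, submitted = float(m.group(1)), float(m.group(2)), int(m.group(3))

busy = 1 - idle / window
# ~0.14% busy with 6 transaction batches in ~15 s: an essentially idle
# OSD, consistent with the flat cache stats elsewhere in this log.
print(f"kv_sync busy {busy:.3%} of {window:.1f}s, "
      f"{submitted / window:.2f} submits/s")
]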
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:28.090358+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:29.090523+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:30.090699+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191712 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:31.090866+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:32.091030+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:33.091201+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92798976 unmapped: 1523712 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:34.091316+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92798976 unmapped: 1523712 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:35.091466+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196034 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 1392640 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:36.091610+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3e4000/0x0/0x4ffc00000, data 0x11975fd/0x128a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 1392640 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:37.091774+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 1318912 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.935274124s of 10.001768112s, submitted: 13
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:38.091924+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 1187840 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:39.092058+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 1187840 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:40.092182+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa397000/0x0/0x4ffc00000, data 0x11e3e61/0x12d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204328 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93208576 unmapped: 1114112 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:41.092302+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 909312 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:42.092509+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 909312 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa36f000/0x0/0x4ffc00000, data 0x120b4e7/0x12ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:43.092675+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 729088 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:44.092797+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1638400 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:45.092915+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206934 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93831168 unmapped: 1540096 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:46.093062+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93831168 unmapped: 1540096 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:47.093182+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93904896 unmapped: 1466368 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.703379631s of 10.000169754s, submitted: 24
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:48.093302+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2fb000/0x0/0x4ffc00000, data 0x12806a1/0x1373000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 94068736 unmapped: 1302528 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:49.093422+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 94093312 unmapped: 1277952 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:50.093570+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205894 data_alloc: 218103808 data_used: 454656
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 2187264 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:51.093681+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 2187264 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:52.093846+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 2187264 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2f7000/0x0/0x4ffc00000, data 0x1284b19/0x1377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:53.094057+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 974848 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:54.094208+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2df000/0x0/0x4ffc00000, data 0x129c984/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
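[The handle_osd_map line above shows the OSD receiving OSDMap epoch 168 while still holding 167; the heartbeats below switch from "osd.0 167" to "osd.0 168" and later to 169 after the next such message, so the map is catching up one epoch at a time. A sketch that parses the epoch bookkeeping out of the line:

import re

line = "osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]"

m = re.search(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+), "
              r"src has \[(\d+),(\d+)\]", line)
first, last, have, src_lo, src_hi = map(int, m.groups())

# The message carries maps first..last; the OSD is (last - have) epochs
# behind, and the sender could backfill anything from src_lo onward.
print(f"received epochs {first}..{last}, local epoch {have}, "
      f"lag {last - have}; sender holds [{src_lo},{src_hi}]")
]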
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 925696 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:55.094368+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212656 data_alloc: 218103808 data_used: 462848
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 925696 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:56.094520+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95518720 unmapped: 901120 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:57.094683+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95518720 unmapped: 901120 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:58.094874+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12c2c36/0x13b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95199232 unmapped: 1220608 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:59.095052+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 2113536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:00.095189+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214480 data_alloc: 218103808 data_used: 462848
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 2113536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
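[The _renew_subs line above refreshes the OSD's monitor subscriptions (how it learns about new osdmaps, among other things), and the message goes to mon.compute-0 over msgr2; 3300 is the standard msgr2 monitor port. A sketch splitting the printed entity address; the nonce interpretation (it disambiguates restarted daemons on the same ip:port) is stated from general ceph knowledge, not from this log:

import re

addr = "v2:192.168.122.100:3300/0"

# Entity address: messenger protocol version, IP, port, and a nonce.
m = re.match(r"(v[12]):([\d.]+):(\d+)/(\d+)", addr)
proto, ip, port, nonce = m.groups()
print(f"{proto} messenger to {ip}:{port} (nonce {nonce})")
]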
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.013223648s of 13.096959114s, submitted: 33
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:01.095351+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:02.095504+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:03.095658+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:04.095854+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2b3000/0x0/0x4ffc00000, data 0x12c4699/0x13ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:05.095997+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216286 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:06.096207+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:07.096389+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2b3000/0x0/0x4ffc00000, data 0x12c4699/0x13ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:08.096535+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 1998848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:09.096700+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 1998848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:10.096827+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 1998848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:11.096968+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:12.097097+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:13.097239+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:14.097393+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:15.097513+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:16.097663+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:17.097784+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:18.097941+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:19.098064+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:20.098215+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:21.098394+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:22.098548+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:23.288973+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:24.289155+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:25.289303+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:26.289656+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:27.289782+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:28.289899+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:29.289992+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:30.290137+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:31.290241+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:32.290350+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:33.290483+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:34.290601+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:35.290748+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:36.290879+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:37.290993+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:38.291115+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:39.291319+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:40.291734+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:41.291854+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:42.291962+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:43.292141+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:44.292289+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:45.292473+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:46.292626+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:47.292755+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:48.292882+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:49.293009+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:50.293179+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:51.293315+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:52.293482+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:53.293684+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:54.293797+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:55.293914+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:56.294061+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:57.294223+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:58.294323+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:59.294449+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:00.294586+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:01.294716+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:02.294832+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:03.294994+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:04.295133+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:05.295309+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:45:39 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:45:39 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95477760 unmapped: 1990656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:06.295530+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: do_command 'config diff' '{prefix=config diff}'
Nov 29 05:45:39 compute-0 ceph-osd[89151]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 05:45:39 compute-0 ceph-osd[89151]: do_command 'config show' '{prefix=config show}'
Nov 29 05:45:39 compute-0 ceph-osd[89151]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 05:45:39 compute-0 ceph-osd[89151]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 05:45:39 compute-0 ceph-osd[89151]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 05:45:39 compute-0 ceph-osd[89151]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 05:45:39 compute-0 ceph-osd[89151]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 2138112 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:07.295671+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95346688 unmapped: 2121728 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:08.295927+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:45:39 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95346688 unmapped: 2121728 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:45:39 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:09.296056+0000)
Nov 29 05:45:39 compute-0 ceph-osd[89151]: do_command 'log dump' '{prefix=log dump}'
Nov 29 05:45:39 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 05:45:39 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2008575624' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 05:45:39 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14655 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:39 compute-0 nova_compute[254898]: 2025-11-29 05:45:39.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:45:40 compute-0 ceph-mon[75176]: from='client.14641 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:40 compute-0 ceph-mon[75176]: from='client.14643 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:40 compute-0 ceph-mon[75176]: from='client.14647 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:40 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2214700980' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 05:45:40 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2008575624' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 05:45:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 05:45:40 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750042925' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 05:45:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 05:45:40 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2610871547' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 05:45:40 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 05:45:40 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 05:45:40 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:41 compute-0 ceph-mon[75176]: from='client.14651 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:41 compute-0 ceph-mon[75176]: from='client.14655 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:41 compute-0 ceph-mon[75176]: pgmap v1269: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:41 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/750042925' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 05:45:41 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2610871547' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 05:45:41 compute-0 ceph-mon[75176]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 05:45:41 compute-0 ceph-mon[75176]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 05:45:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 05:45:41 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/117032659' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:45:41
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'images']
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f995b6d0>)]
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f97f0fa0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fa4f976c4f0>)]
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:45:41 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14667 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:41 compute-0 nova_compute[254898]: 2025-11-29 05:45:41.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:45:41 compute-0 nova_compute[254898]: 2025-11-29 05:45:41.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:45:41 compute-0 nova_compute[254898]: 2025-11-29 05:45:41.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:45:41 compute-0 nova_compute[254898]: 2025-11-29 05:45:41.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:45:41 compute-0 nova_compute[254898]: 2025-11-29 05:45:41.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:45:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:45:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:45:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 05:45:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724083621' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 05:45:42 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/117032659' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 05:45:42 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2724083621' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 05:45:42 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 05:45:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:42 compute-0 systemd[1]: Started Hostname Service.
Nov 29 05:45:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 05:45:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2913683518' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 05:45:42 compute-0 nova_compute[254898]: 2025-11-29 05:45:42.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:45:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 05:45:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363446046' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 05:45:43 compute-0 ceph-mon[75176]: from='client.14667 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:43 compute-0 ceph-mon[75176]: pgmap v1270: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:43 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2913683518' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 05:45:43 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1363446046' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 05:45:43 compute-0 ceph-mon[75176]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.csskcz(active, since 37m)
Nov 29 05:45:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 05:45:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2967619425' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 05:45:43 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14677 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:43 compute-0 nova_compute[254898]: 2025-11-29 05:45:43.947 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:45:43 compute-0 nova_compute[254898]: 2025-11-29 05:45:43.947 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:45:43 compute-0 nova_compute[254898]: 2025-11-29 05:45:43.947 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:45:43 compute-0 nova_compute[254898]: 2025-11-29 05:45:43.947 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:45:43 compute-0 nova_compute[254898]: 2025-11-29 05:45:43.948 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:45:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:44 compute-0 ceph-mon[75176]: mgrmap e19: compute-0.csskcz(active, since 37m)
Nov 29 05:45:44 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2967619425' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 05:45:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 05:45:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1564296857' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 05:45:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:45:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428810738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:45:44 compute-0 nova_compute[254898]: 2025-11-29 05:45:44.382 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:45:44 compute-0 nova_compute[254898]: 2025-11-29 05:45:44.553 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:45:44 compute-0 nova_compute[254898]: 2025-11-29 05:45:44.554 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4835MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:45:44 compute-0 nova_compute[254898]: 2025-11-29 05:45:44.555 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:45:44 compute-0 nova_compute[254898]: 2025-11-29 05:45:44.555 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:45:44 compute-0 nova_compute[254898]: 2025-11-29 05:45:44.632 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:45:44 compute-0 nova_compute[254898]: 2025-11-29 05:45:44.632 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:45:44 compute-0 nova_compute[254898]: 2025-11-29 05:45:44.650 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:45:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 05:45:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591206970' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 05:45:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:45:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1536835265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:45:45 compute-0 nova_compute[254898]: 2025-11-29 05:45:45.122 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:45:45 compute-0 nova_compute[254898]: 2025-11-29 05:45:45.128 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:45:45 compute-0 nova_compute[254898]: 2025-11-29 05:45:45.164 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:45:45 compute-0 nova_compute[254898]: 2025-11-29 05:45:45.166 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:45:45 compute-0 nova_compute[254898]: 2025-11-29 05:45:45.166 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:45:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14687 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:45 compute-0 ceph-mon[75176]: from='client.14677 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:45 compute-0 ceph-mon[75176]: pgmap v1271: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:45 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1564296857' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 05:45:45 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/428810738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:45:45 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2591206970' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 05:45:45 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1536835265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:45:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 29 05:45:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4044120211' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 05:45:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:45 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14691 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:46 compute-0 ceph-mon[75176]: from='client.14687 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:46 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/4044120211' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 05:45:46 compute-0 ceph-mon[75176]: from='client.14691 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:46 compute-0 ceph-mon[75176]: pgmap v1272: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:46 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14693 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 29 05:45:46 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3730237594' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 05:45:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 29 05:45:47 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/742037056' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 05:45:47 compute-0 ceph-mon[75176]: from='client.14693 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:47 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3730237594' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 05:45:47 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/742037056' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 05:45:47 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14699 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14701 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:45:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:48 compute-0 ceph-mon[75176]: from='client.14699 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:48 compute-0 ceph-mon[75176]: from='client.14701 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:48 compute-0 ceph-mon[75176]: pgmap v1273: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 29 05:45:48 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4259878018' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 05:45:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 29 05:45:48 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3872949790' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 05:45:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14707 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:49 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/4259878018' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 05:45:49 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3872949790' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 05:45:49 compute-0 ceph-mon[75176]: from='client.14707 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:49 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14709 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:49 compute-0 podman[279360]: 2025-11-29 05:45:49.910005558 +0000 UTC m=+0.085924244 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 05:45:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 05:45:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519051548' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 05:45:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 29 05:45:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/322293043' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 05:45:50 compute-0 ceph-mon[75176]: from='client.14709 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:45:50 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2519051548' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 05:45:50 compute-0 ceph-mon[75176]: pgmap v1274: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:50 compute-0 sudo[279778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:45:50 compute-0 sudo[279778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:50 compute-0 sudo[279778]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:50 compute-0 sudo[279841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:45:50 compute-0 sudo[279841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:50 compute-0 sudo[279841]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:50 compute-0 ovs-appctl[279934]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 05:45:50 compute-0 sudo[279901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:45:50 compute-0 sudo[279901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:50 compute-0 sudo[279901]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:50 compute-0 ovs-appctl[279954]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 05:45:50 compute-0 ovs-appctl[279975]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 05:45:50 compute-0 sudo[279960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:45:50 compute-0 sudo[279960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 29 05:45:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/709355141' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 05:45:51 compute-0 nova_compute[254898]: 2025-11-29 05:45:51.163 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:45:51 compute-0 nova_compute[254898]: 2025-11-29 05:45:51.164 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:45:51 compute-0 nova_compute[254898]: 2025-11-29 05:45:51.164 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:45:51 compute-0 nova_compute[254898]: 2025-11-29 05:45:51.164 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:45:51 compute-0 nova_compute[254898]: 2025-11-29 05:45:51.183 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:45:51 compute-0 sudo[279960]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:45:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:45:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:45:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:45:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14717 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:45:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d7389116-3c6e-40af-aebf-843a044c6dce does not exist
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 5ac551c5-a269-451d-994d-76afc7ddb33d does not exist
Nov 29 05:45:51 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 4b477d2c-2e20-457b-8520-6e3c4a64d599 does not exist
Nov 29 05:45:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:45:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:45:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/322293043' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 05:45:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/709355141' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 05:45:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:45:51 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:45:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:45:51 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:45:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:45:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:45:51 compute-0 sudo[280460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:45:51 compute-0 sudo[280460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:51 compute-0 sudo[280460]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:51 compute-0 sudo[280506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:45:51 compute-0 sudo[280506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:51 compute-0 sudo[280506]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:51 compute-0 sudo[280557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:45:51 compute-0 sudo[280557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:51 compute-0 sudo[280557]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:51 compute-0 sudo[280609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:45:51 compute-0 sudo[280609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 05:45:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527739268' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 05:45:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:52 compute-0 podman[280823]: 2025-11-29 05:45:52.346620456 +0000 UTC m=+0.055174404 container create cab63af85dc2c9713cf5b4476135c299adaa93ec09c0d6de2a5f1be9ad905af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gates, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:45:52 compute-0 systemd[1]: Started libpod-conmon-cab63af85dc2c9713cf5b4476135c299adaa93ec09c0d6de2a5f1be9ad905af5.scope.
Nov 29 05:45:52 compute-0 podman[280823]: 2025-11-29 05:45:52.317519333 +0000 UTC m=+0.026073301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:45:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:45:52 compute-0 podman[280823]: 2025-11-29 05:45:52.459174283 +0000 UTC m=+0.167728251 container init cab63af85dc2c9713cf5b4476135c299adaa93ec09c0d6de2a5f1be9ad905af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:45:52 compute-0 podman[280823]: 2025-11-29 05:45:52.469302953 +0000 UTC m=+0.177856891 container start cab63af85dc2c9713cf5b4476135c299adaa93ec09c0d6de2a5f1be9ad905af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gates, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:45:52 compute-0 podman[280823]: 2025-11-29 05:45:52.473741269 +0000 UTC m=+0.182295237 container attach cab63af85dc2c9713cf5b4476135c299adaa93ec09c0d6de2a5f1be9ad905af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 05:45:52 compute-0 festive_gates[280857]: 167 167
Nov 29 05:45:52 compute-0 systemd[1]: libpod-cab63af85dc2c9713cf5b4476135c299adaa93ec09c0d6de2a5f1be9ad905af5.scope: Deactivated successfully.
Nov 29 05:45:52 compute-0 conmon[280857]: conmon cab63af85dc2c9713cf5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cab63af85dc2c9713cf5b4476135c299adaa93ec09c0d6de2a5f1be9ad905af5.scope/container/memory.events
Nov 29 05:45:52 compute-0 podman[280823]: 2025-11-29 05:45:52.481995465 +0000 UTC m=+0.190549403 container died cab63af85dc2c9713cf5b4476135c299adaa93ec09c0d6de2a5f1be9ad905af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gates, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 05:45:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 29 05:45:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/744740157' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 05:45:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f502444c41b5f08dcd33748ee5fcc47e23137f7e6c7876c051beb5527f8194cd-merged.mount: Deactivated successfully.
Nov 29 05:45:52 compute-0 podman[280823]: 2025-11-29 05:45:52.628351437 +0000 UTC m=+0.336905375 container remove cab63af85dc2c9713cf5b4476135c299adaa93ec09c0d6de2a5f1be9ad905af5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:45:52 compute-0 systemd[1]: libpod-conmon-cab63af85dc2c9713cf5b4476135c299adaa93ec09c0d6de2a5f1be9ad905af5.scope: Deactivated successfully.
Nov 29 05:45:52 compute-0 ceph-mon[75176]: from='client.14717 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:45:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:45:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:45:52 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:45:52 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2527739268' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 05:45:52 compute-0 ceph-mon[75176]: pgmap v1275: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:52 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/744740157' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 05:45:52 compute-0 podman[280941]: 2025-11-29 05:45:52.792280926 +0000 UTC m=+0.050311658 container create 66aaf087bd1a022f71a5cc16df40938eefad0544ac58e8f36515f6ba689d8697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:45:52 compute-0 systemd[1]: Started libpod-conmon-66aaf087bd1a022f71a5cc16df40938eefad0544ac58e8f36515f6ba689d8697.scope.
Nov 29 05:45:52 compute-0 podman[280941]: 2025-11-29 05:45:52.765442148 +0000 UTC m=+0.023472900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:45:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4e086b114b2e8a29cc648110e181063e464687a3fe9499e285ee91bafc9284/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4e086b114b2e8a29cc648110e181063e464687a3fe9499e285ee91bafc9284/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4e086b114b2e8a29cc648110e181063e464687a3fe9499e285ee91bafc9284/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4e086b114b2e8a29cc648110e181063e464687a3fe9499e285ee91bafc9284/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd4e086b114b2e8a29cc648110e181063e464687a3fe9499e285ee91bafc9284/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 29 05:45:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3528279639' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 05:45:53 compute-0 podman[280941]: 2025-11-29 05:45:53.03629848 +0000 UTC m=+0.294329242 container init 66aaf087bd1a022f71a5cc16df40938eefad0544ac58e8f36515f6ba689d8697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:45:53 compute-0 podman[280941]: 2025-11-29 05:45:53.043359748 +0000 UTC m=+0.301390480 container start 66aaf087bd1a022f71a5cc16df40938eefad0544ac58e8f36515f6ba689d8697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:45:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 29 05:45:53 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937455648' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 05:45:53 compute-0 podman[280941]: 2025-11-29 05:45:53.474419141 +0000 UTC m=+0.732449873 container attach 66aaf087bd1a022f71a5cc16df40938eefad0544ac58e8f36515f6ba689d8697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:45:53 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14727 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:54 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3528279639' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 05:45:54 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3937455648' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 05:45:54 compute-0 wizardly_heyrovsky[280969]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:45:54 compute-0 wizardly_heyrovsky[280969]: --> relative data size: 1.0
Nov 29 05:45:54 compute-0 wizardly_heyrovsky[280969]: --> All data devices are unavailable
Nov 29 05:45:54 compute-0 systemd[1]: libpod-66aaf087bd1a022f71a5cc16df40938eefad0544ac58e8f36515f6ba689d8697.scope: Deactivated successfully.
Nov 29 05:45:54 compute-0 conmon[280969]: conmon 66aaf087bd1a022f71a5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66aaf087bd1a022f71a5cc16df40938eefad0544ac58e8f36515f6ba689d8697.scope/container/memory.events
Nov 29 05:45:54 compute-0 podman[280941]: 2025-11-29 05:45:54.611391034 +0000 UTC m=+1.869421766 container died 66aaf087bd1a022f71a5cc16df40938eefad0544ac58e8f36515f6ba689d8697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:45:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd4e086b114b2e8a29cc648110e181063e464687a3fe9499e285ee91bafc9284-merged.mount: Deactivated successfully.
Nov 29 05:45:54 compute-0 podman[280941]: 2025-11-29 05:45:54.706968827 +0000 UTC m=+1.964999559 container remove 66aaf087bd1a022f71a5cc16df40938eefad0544ac58e8f36515f6ba689d8697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:45:54 compute-0 systemd[1]: libpod-conmon-66aaf087bd1a022f71a5cc16df40938eefad0544ac58e8f36515f6ba689d8697.scope: Deactivated successfully.
Nov 29 05:45:54 compute-0 sudo[280609]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:54 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 29 05:45:54 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132741405' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 05:45:54 compute-0 sudo[281195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:45:54 compute-0 sudo[281195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:54 compute-0 sudo[281195]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:54 compute-0 sudo[281226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:45:54 compute-0 sudo[281226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:54 compute-0 sudo[281226]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:54 compute-0 sudo[281256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:45:54 compute-0 sudo[281256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:54 compute-0 sudo[281256]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:54 compute-0 sudo[281307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:45:54 compute-0 sudo[281307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 29 05:45:55 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1369613957' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 05:45:55 compute-0 podman[281381]: 2025-11-29 05:45:55.420695444 +0000 UTC m=+0.126012088 container create b251ba4576f391f4f3a195f08b76e7fc006b468a7cc97d97263df8eb7f7c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:45:55 compute-0 podman[281381]: 2025-11-29 05:45:55.325588002 +0000 UTC m=+0.030904666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:45:55 compute-0 ceph-mon[75176]: from='client.14727 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:55 compute-0 ceph-mon[75176]: pgmap v1276: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:55 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1132741405' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 05:45:55 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1369613957' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 05:45:55 compute-0 systemd[1]: Started libpod-conmon-b251ba4576f391f4f3a195f08b76e7fc006b468a7cc97d97263df8eb7f7c36a9.scope.
Nov 29 05:45:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:45:55 compute-0 podman[281381]: 2025-11-29 05:45:55.489707746 +0000 UTC m=+0.195024410 container init b251ba4576f391f4f3a195f08b76e7fc006b468a7cc97d97263df8eb7f7c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:45:55 compute-0 podman[281381]: 2025-11-29 05:45:55.495986555 +0000 UTC m=+0.201303199 container start b251ba4576f391f4f3a195f08b76e7fc006b468a7cc97d97263df8eb7f7c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:45:55 compute-0 podman[281381]: 2025-11-29 05:45:55.4995113 +0000 UTC m=+0.204827954 container attach b251ba4576f391f4f3a195f08b76e7fc006b468a7cc97d97263df8eb7f7c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:45:55 compute-0 admiring_heyrovsky[281426]: 167 167
Nov 29 05:45:55 compute-0 systemd[1]: libpod-b251ba4576f391f4f3a195f08b76e7fc006b468a7cc97d97263df8eb7f7c36a9.scope: Deactivated successfully.
Nov 29 05:45:55 compute-0 podman[281381]: 2025-11-29 05:45:55.501959047 +0000 UTC m=+0.207275691 container died b251ba4576f391f4f3a195f08b76e7fc006b468a7cc97d97263df8eb7f7c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 05:45:55 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14733 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f14fc18c8b9cd8e050635f349f58e115f2d1121ea2e06ab79eef810667e36765-merged.mount: Deactivated successfully.
Nov 29 05:45:55 compute-0 podman[281381]: 2025-11-29 05:45:55.788618446 +0000 UTC m=+0.493935110 container remove b251ba4576f391f4f3a195f08b76e7fc006b468a7cc97d97263df8eb7f7c36a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:45:55 compute-0 systemd[1]: libpod-conmon-b251ba4576f391f4f3a195f08b76e7fc006b468a7cc97d97263df8eb7f7c36a9.scope: Deactivated successfully.
Nov 29 05:45:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:45:55 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 29 05:45:55 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3305663646' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 05:45:55 compute-0 podman[281514]: 2025-11-29 05:45:55.9914419 +0000 UTC m=+0.088115217 container create bad4436ef2f3348d0e1bf872b41c0fece3bfb5a169cd38186c59ef5d0fe8aa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 05:45:56 compute-0 podman[281514]: 2025-11-29 05:45:55.924066038 +0000 UTC m=+0.020739385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:45:56 compute-0 systemd[1]: Started libpod-conmon-bad4436ef2f3348d0e1bf872b41c0fece3bfb5a169cd38186c59ef5d0fe8aa1a.scope.
Nov 29 05:45:56 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7425edad2fc0662f73afe27509b7268762b672a8991f9e34f8383798b0f56305/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7425edad2fc0662f73afe27509b7268762b672a8991f9e34f8383798b0f56305/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7425edad2fc0662f73afe27509b7268762b672a8991f9e34f8383798b0f56305/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7425edad2fc0662f73afe27509b7268762b672a8991f9e34f8383798b0f56305/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:56 compute-0 podman[281514]: 2025-11-29 05:45:56.157189132 +0000 UTC m=+0.253862459 container init bad4436ef2f3348d0e1bf872b41c0fece3bfb5a169cd38186c59ef5d0fe8aa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:45:56 compute-0 podman[281514]: 2025-11-29 05:45:56.164564068 +0000 UTC m=+0.261237385 container start bad4436ef2f3348d0e1bf872b41c0fece3bfb5a169cd38186c59ef5d0fe8aa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:45:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:56 compute-0 podman[281514]: 2025-11-29 05:45:56.233913597 +0000 UTC m=+0.330586944 container attach bad4436ef2f3348d0e1bf872b41c0fece3bfb5a169cd38186c59ef5d0fe8aa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 05:45:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14737 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:56 compute-0 ceph-mon[75176]: from='client.14733 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:56 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3305663646' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 05:45:56 compute-0 ceph-mon[75176]: pgmap v1277: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 426 B/s wr, 0 op/s
Nov 29 05:45:56 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14739 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]: {
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:     "0": [
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:         {
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "devices": [
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "/dev/loop3"
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             ],
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_name": "ceph_lv0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_size": "21470642176",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "name": "ceph_lv0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "tags": {
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.cluster_name": "ceph",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.crush_device_class": "",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.encrypted": "0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.osd_id": "0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.type": "block",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.vdo": "0"
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             },
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "type": "block",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "vg_name": "ceph_vg0"
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:         }
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:     ],
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:     "1": [
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:         {
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "devices": [
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "/dev/loop4"
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             ],
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_name": "ceph_lv1",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_size": "21470642176",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "name": "ceph_lv1",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "tags": {
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.cluster_name": "ceph",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.crush_device_class": "",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.encrypted": "0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.osd_id": "1",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.type": "block",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.vdo": "0"
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             },
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "type": "block",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "vg_name": "ceph_vg1"
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:         }
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:     ],
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:     "2": [
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:         {
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "devices": [
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "/dev/loop5"
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             ],
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_name": "ceph_lv2",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_size": "21470642176",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "name": "ceph_lv2",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "tags": {
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.cluster_name": "ceph",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.crush_device_class": "",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.encrypted": "0",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.osd_id": "2",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.type": "block",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:                 "ceph.vdo": "0"
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             },
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "type": "block",
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:             "vg_name": "ceph_vg2"
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:         }
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]:     ]
Nov 29 05:45:56 compute-0 inspiring_thompson[281537]: }
Nov 29 05:45:56 compute-0 systemd[1]: libpod-bad4436ef2f3348d0e1bf872b41c0fece3bfb5a169cd38186c59ef5d0fe8aa1a.scope: Deactivated successfully.
Nov 29 05:45:56 compute-0 podman[281514]: 2025-11-29 05:45:56.95968953 +0000 UTC m=+1.056362847 container died bad4436ef2f3348d0e1bf872b41c0fece3bfb5a169cd38186c59ef5d0fe8aa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:45:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-7425edad2fc0662f73afe27509b7268762b672a8991f9e34f8383798b0f56305-merged.mount: Deactivated successfully.
Nov 29 05:45:57 compute-0 podman[281514]: 2025-11-29 05:45:57.160685051 +0000 UTC m=+1.257358368 container remove bad4436ef2f3348d0e1bf872b41c0fece3bfb5a169cd38186c59ef5d0fe8aa1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:45:57 compute-0 systemd[1]: libpod-conmon-bad4436ef2f3348d0e1bf872b41c0fece3bfb5a169cd38186c59ef5d0fe8aa1a.scope: Deactivated successfully.
Nov 29 05:45:57 compute-0 sudo[281307]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:57 compute-0 sudo[281637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:45:57 compute-0 sudo[281637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:57 compute-0 sudo[281637]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 29 05:45:57 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1070460375' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 05:45:57 compute-0 sudo[281662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:45:57 compute-0 sudo[281662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:57 compute-0 sudo[281662]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:57 compute-0 sudo[281691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:45:57 compute-0 sudo[281691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:57 compute-0 sudo[281691]: pam_unix(sudo:session): session closed for user root
Nov 29 05:45:57 compute-0 sudo[281716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:45:57 compute-0 sudo[281716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:45:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 29 05:45:57 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2412843339' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 05:45:57 compute-0 ceph-mon[75176]: from='client.14737 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:57 compute-0 ceph-mon[75176]: from='client.14739 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:57 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1070460375' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 05:45:57 compute-0 podman[281807]: 2025-11-29 05:45:57.766507242 +0000 UTC m=+0.084518362 container create 28ca7da140344fb5cc1b0856ea27738c834f522c35125356054e5ed4bcb74321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:45:57 compute-0 podman[281807]: 2025-11-29 05:45:57.700679486 +0000 UTC m=+0.018690586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:45:57 compute-0 systemd[1]: Started libpod-conmon-28ca7da140344fb5cc1b0856ea27738c834f522c35125356054e5ed4bcb74321.scope.
Nov 29 05:45:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:45:57 compute-0 podman[281807]: 2025-11-29 05:45:57.923045544 +0000 UTC m=+0.241056654 container init 28ca7da140344fb5cc1b0856ea27738c834f522c35125356054e5ed4bcb74321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:45:57 compute-0 podman[281807]: 2025-11-29 05:45:57.930196055 +0000 UTC m=+0.248207135 container start 28ca7da140344fb5cc1b0856ea27738c834f522c35125356054e5ed4bcb74321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:45:57 compute-0 brave_chebyshev[281847]: 167 167
Nov 29 05:45:57 compute-0 systemd[1]: libpod-28ca7da140344fb5cc1b0856ea27738c834f522c35125356054e5ed4bcb74321.scope: Deactivated successfully.
Nov 29 05:45:58 compute-0 podman[281807]: 2025-11-29 05:45:58.010548626 +0000 UTC m=+0.328559726 container attach 28ca7da140344fb5cc1b0856ea27738c834f522c35125356054e5ed4bcb74321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:45:58 compute-0 podman[281807]: 2025-11-29 05:45:58.010885994 +0000 UTC m=+0.328897064 container died 28ca7da140344fb5cc1b0856ea27738c834f522c35125356054e5ed4bcb74321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14745 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0400ee2710460c443f72febd32cfb72f9b89c919db750e0aa68ed432ff57d235-merged.mount: Deactivated successfully.
Nov 29 05:45:58 compute-0 podman[281807]: 2025-11-29 05:45:58.047430663 +0000 UTC m=+0.365441743 container remove 28ca7da140344fb5cc1b0856ea27738c834f522c35125356054e5ed4bcb74321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 05:45:58 compute-0 systemd[1]: libpod-conmon-28ca7da140344fb5cc1b0856ea27738c834f522c35125356054e5ed4bcb74321.scope: Deactivated successfully.
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:58 compute-0 podman[281882]: 2025-11-29 05:45:58.17640189 +0000 UTC m=+0.022067285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:45:58 compute-0 podman[281882]: 2025-11-29 05:45:58.314316521 +0000 UTC m=+0.159981886 container create a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14747 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:45:58 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
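The pg_autoscaler pass above computes each pool's PG target as capacity_ratio x bias x a cluster-wide PG budget. With this host's three OSDs and the default mon_target_pg_per_osd of 100, that budget works out to 300, which reproduces every logged target; the result is then quantized to a power of two, and pg_num only changes when the quantized value is far from the current one, which is why every pool here stays put. A minimal sketch of that arithmetic, assuming the inferred budget of 300 (pg_target is an illustrative helper, not Ceph's actual function):

    # pg_autoscaler arithmetic from the log above.
    # ASSUMPTION: PG budget = 3 OSDs * mon_target_pg_per_osd (default 100) = 300,
    # inferred from the logged ratios; pg_target() is illustrative, not Ceph's code.
    def pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
        return capacity_ratio * bias * pg_budget

    # Pool 'images': using 0.000665858301588852 of space, bias 1.0
    print(pg_target(0.000665858301588852, 1.0))   # ~0.1997575, matching the logged target
    # Pool 'cephfs.cephfs.meta': using 0.0005435097797421371 of space, bias 4.0
    print(pg_target(0.0005435097797421371, 4.0))  # ~0.6522117, matching the logged target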
Nov 29 05:45:58 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2412843339' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 05:45:58 compute-0 ceph-mon[75176]: from='client.14745 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:58 compute-0 ceph-mon[75176]: pgmap v1278: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:45:58 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 05:45:58 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474353515' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 05:45:59 compute-0 rsyslogd[1003]: imjournal: 18514 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 05:45:59 compute-0 systemd[1]: Started libpod-conmon-a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee.scope.
Nov 29 05:45:59 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 29 05:45:59 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437090990' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 05:45:59 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3597fa7e0b110710fd11032b35278258a0032016422dc60f4dcf3171420b49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3597fa7e0b110710fd11032b35278258a0032016422dc60f4dcf3171420b49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3597fa7e0b110710fd11032b35278258a0032016422dc60f4dcf3171420b49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3597fa7e0b110710fd11032b35278258a0032016422dc60f4dcf3171420b49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
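The four xfs warnings above are the kernel's y2038 notice for a filesystem created without xfs large-timestamp support: inode times top out at 0x7fffffff seconds after the Unix epoch. The boundary itself is quick to check (plain Python, just the arithmetic):

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch -- the limit cited in the xfs warnings
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00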
Nov 29 05:45:59 compute-0 podman[281882]: 2025-11-29 05:45:59.379517438 +0000 UTC m=+1.225182833 container init a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:45:59 compute-0 podman[281882]: 2025-11-29 05:45:59.387478267 +0000 UTC m=+1.233143632 container start a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:45:59 compute-0 podman[281882]: 2025-11-29 05:45:59.391220306 +0000 UTC m=+1.236885671 container attach a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:45:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14753 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:59 compute-0 ceph-mon[75176]: from='client.14747 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:45:59 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2474353515' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 05:45:59 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2437090990' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 05:45:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14755 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:46:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]: {
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "osd_id": 0,
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "type": "bluestore"
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:     },
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "osd_id": 1,
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "type": "bluestore"
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:     },
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "osd_id": 2,
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:         "type": "bluestore"
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]:     }
Nov 29 05:46:00 compute-0 laughing_rhodes[281977]: }
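This block is the ceph-volume raw list --format json output requested by the cephadm call at 05:45:57: a map from OSD UUID to device metadata for the host's three bluestore OSDs. A sketch of consuming it, abridged to one entry and with illustrative names (not cephadm's own parsing code):

    import json

    # One of the three OSD entries, abridged from the output above.
    raw_list_output = """{
        "3cc3f442-c807-4e2a-868e-a4aae87af231": {
            "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
            "type": "bluestore"
        }
    }"""

    raw_list = json.loads(raw_list_output)
    for meta in sorted(raw_list.values(), key=lambda m: m["osd_id"]):
        print(f"osd.{meta['osd_id']}  {meta['device']}  ({meta['type']})")
    # osd.0  /dev/mapper/ceph_vg0-ceph_lv0  (bluestore)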
Nov 29 05:46:00 compute-0 systemd[1]: libpod-a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee.scope: Deactivated successfully.
Nov 29 05:46:00 compute-0 podman[281882]: 2025-11-29 05:46:00.335698412 +0000 UTC m=+2.181363767 container died a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 05:46:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 05:46:00 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/973467258' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:46:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc3597fa7e0b110710fd11032b35278258a0032016422dc60f4dcf3171420b49-merged.mount: Deactivated successfully.
Nov 29 05:46:00 compute-0 ceph-mon[75176]: from='client.14753 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:46:00 compute-0 ceph-mon[75176]: from='client.14755 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:46:00 compute-0 ceph-mon[75176]: pgmap v1279: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:00 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/973467258' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 05:46:00 compute-0 podman[281882]: 2025-11-29 05:46:00.731670639 +0000 UTC m=+2.577336004 container remove a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 05:46:00 compute-0 systemd[1]: libpod-conmon-a6a6e3cab751aef8857249c0c30465fcce7f33ce0eb8149c585328d16b9704ee.scope: Deactivated successfully.
Nov 29 05:46:00 compute-0 sudo[281716]: pam_unix(sudo:session): session closed for user root
Nov 29 05:46:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:46:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:46:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:46:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 29 05:46:00 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251823936' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 05:46:00 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:46:00 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 7abb6406-2d4a-4be5-b21a-a2366540b388 does not exist
Nov 29 05:46:00 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 4d00d015-b9c1-4d85-9292-ea19bd5fae69 does not exist
Nov 29 05:46:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:00 compute-0 sudo[282182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:46:00 compute-0 sudo[282182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:46:00 compute-0 sudo[282182]: pam_unix(sudo:session): session closed for user root
Nov 29 05:46:01 compute-0 sudo[282284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:46:01 compute-0 sudo[282284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:46:01 compute-0 sudo[282284]: pam_unix(sudo:session): session closed for user root
Nov 29 05:46:01 compute-0 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 05:46:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:46:01 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/4251823936' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 05:46:01 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:46:02 compute-0 systemd[1]: Starting Time & Date Service...
Nov 29 05:46:02 compute-0 systemd[1]: Started Time & Date Service.
Nov 29 05:46:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:02 compute-0 ceph-mon[75176]: pgmap v1280: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:02 compute-0 sshd-session[282528]: Invalid user root1 from 45.120.216.232 port 55940
Nov 29 05:46:03 compute-0 sshd-session[282528]: Received disconnect from 45.120.216.232 port 55940:11: Bye Bye [preauth]
Nov 29 05:46:03 compute-0 sshd-session[282528]: Disconnected from invalid user root1 45.120.216.232 port 55940 [preauth]
Nov 29 05:46:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:05 compute-0 ceph-mon[75176]: pgmap v1281: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:06 compute-0 podman[282615]: 2025-11-29 05:46:06.047793306 +0000 UTC m=+0.084576013 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:46:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:10 compute-0 podman[282636]: 2025-11-29 05:46:10.07057445 +0000 UTC m=+0.120952738 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:46:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:10 compute-0 ceph-mon[75176]: pgmap v1282: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:10 compute-0 ceph-mon[75176]: pgmap v1283: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:10 compute-0 ceph-mon[75176]: pgmap v1284: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:46:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:46:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:46:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:46:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:46:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:46:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:12 compute-0 ceph-mon[75176]: pgmap v1285: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:46:13.760 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:46:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:46:13.761 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:46:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:46:13.761 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:46:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:46:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2737330326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:46:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:46:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2737330326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:46:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/2737330326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:46:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/2737330326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:46:15 compute-0 ceph-mon[75176]: pgmap v1286: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:16 compute-0 ceph-mon[75176]: pgmap v1287: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:18 compute-0 ceph-mon[75176]: pgmap v1288: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:20 compute-0 podman[282662]: 2025-11-29 05:46:20.012722291 +0000 UTC m=+0.055132592 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 05:46:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:22 compute-0 ceph-mon[75176]: pgmap v1289: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:22 compute-0 ceph-mon[75176]: pgmap v1290: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:24 compute-0 ceph-mon[75176]: pgmap v1291: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:26 compute-0 ceph-mon[75176]: pgmap v1292: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:28 compute-0 ceph-mon[75176]: pgmap v1293: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:30 compute-0 sudo[275046]: pam_unix(sudo:session): session closed for user root
Nov 29 05:46:30 compute-0 sshd-session[275045]: Received disconnect from 192.168.122.10 port 38330:11: disconnected by user
Nov 29 05:46:30 compute-0 sshd-session[275045]: Disconnected from user zuul 192.168.122.10 port 38330
Nov 29 05:46:30 compute-0 sshd-session[275042]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:46:30 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 05:46:30 compute-0 systemd[1]: session-51.scope: Consumed 2min 34.815s CPU time, 763.2M memory peak, read 280.1M from disk, written 214.8M to disk.
Nov 29 05:46:30 compute-0 systemd-logind[793]: Session 51 logged out. Waiting for processes to exit.
Nov 29 05:46:30 compute-0 systemd-logind[793]: Removed session 51.
Nov 29 05:46:30 compute-0 sshd-session[282682]: Accepted publickey for zuul from 192.168.122.10 port 42266 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:46:30 compute-0 systemd-logind[793]: New session 52 of user zuul.
Nov 29 05:46:30 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 29 05:46:30 compute-0 sshd-session[282682]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:46:30 compute-0 sudo[282686]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2025-11-29-fdtcybh.tar.xz
Nov 29 05:46:30 compute-0 sudo[282686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:46:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:30 compute-0 sudo[282686]: pam_unix(sudo:session): session closed for user root
Nov 29 05:46:30 compute-0 sshd-session[282685]: Received disconnect from 192.168.122.10 port 42266:11: disconnected by user
Nov 29 05:46:30 compute-0 sshd-session[282685]: Disconnected from user zuul 192.168.122.10 port 42266
Nov 29 05:46:30 compute-0 sshd-session[282682]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:46:30 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Nov 29 05:46:30 compute-0 systemd-logind[793]: Session 52 logged out. Waiting for processes to exit.
Nov 29 05:46:30 compute-0 systemd-logind[793]: Removed session 52.
Nov 29 05:46:30 compute-0 sshd-session[282711]: Accepted publickey for zuul from 192.168.122.10 port 42272 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:46:30 compute-0 systemd-logind[793]: New session 53 of user zuul.
Nov 29 05:46:30 compute-0 systemd[1]: Started Session 53 of User zuul.
Nov 29 05:46:30 compute-0 sshd-session[282711]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:46:30 compute-0 ceph-mon[75176]: pgmap v1294: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:30 compute-0 sudo[282715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Nov 29 05:46:30 compute-0 sudo[282715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:46:30 compute-0 sudo[282715]: pam_unix(sudo:session): session closed for user root
Nov 29 05:46:30 compute-0 sshd-session[282714]: Received disconnect from 192.168.122.10 port 42272:11: disconnected by user
Nov 29 05:46:30 compute-0 sshd-session[282714]: Disconnected from user zuul 192.168.122.10 port 42272
Nov 29 05:46:30 compute-0 sshd-session[282711]: pam_unix(sshd:session): session closed for user zuul
Nov 29 05:46:30 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Nov 29 05:46:30 compute-0 systemd-logind[793]: Session 53 logged out. Waiting for processes to exit.
Nov 29 05:46:30 compute-0 systemd-logind[793]: Removed session 53.
Nov 29 05:46:32 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 05:46:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:32 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 05:46:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:33 compute-0 ceph-mon[75176]: pgmap v1295: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:35 compute-0 ceph-mon[75176]: pgmap v1296: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:36 compute-0 sshd-session[282744]: Invalid user mc from 45.78.219.87 port 56072
Nov 29 05:46:36 compute-0 podman[282746]: 2025-11-29 05:46:36.548123268 +0000 UTC m=+0.091811436 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 05:46:36 compute-0 sshd-session[282744]: Received disconnect from 45.78.219.87 port 56072:11: Bye Bye [preauth]
Nov 29 05:46:36 compute-0 sshd-session[282744]: Disconnected from invalid user mc 45.78.219.87 port 56072 [preauth]
Nov 29 05:46:36 compute-0 ceph-mon[75176]: pgmap v1297: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:38 compute-0 ceph-mon[75176]: pgmap v1298: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:39 compute-0 nova_compute[254898]: 2025-11-29 05:46:39.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:46:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:40 compute-0 ceph-mon[75176]: pgmap v1299: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:40 compute-0 nova_compute[254898]: 2025-11-29 05:46:40.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:46:41 compute-0 podman[282767]: 2025-11-29 05:46:41.071728396 +0000 UTC m=+0.120051057 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:46:41
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.meta']
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:46:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:46:41 compute-0 nova_compute[254898]: 2025-11-29 05:46:41.950 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:46:41 compute-0 nova_compute[254898]: 2025-11-29 05:46:41.965 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:46:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:46:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:46:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:42 compute-0 ceph-mon[75176]: pgmap v1300: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:42 compute-0 nova_compute[254898]: 2025-11-29 05:46:42.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:46:42 compute-0 nova_compute[254898]: 2025-11-29 05:46:42.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:46:42 compute-0 nova_compute[254898]: 2025-11-29 05:46:42.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:46:42 compute-0 nova_compute[254898]: 2025-11-29 05:46:42.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:46:42 compute-0 nova_compute[254898]: 2025-11-29 05:46:42.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.059 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.059 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.059 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.059 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.060 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:46:43 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:46:43 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681471638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.501 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:46:43 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3681471638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.659 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.660 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4972MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.660 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.661 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.891 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.892 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:46:43 compute-0 nova_compute[254898]: 2025-11-29 05:46:43.910 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:46:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:44 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:46:44 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606435426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:46:44 compute-0 nova_compute[254898]: 2025-11-29 05:46:44.306 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:46:44 compute-0 nova_compute[254898]: 2025-11-29 05:46:44.310 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:46:44 compute-0 nova_compute[254898]: 2025-11-29 05:46:44.343 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:46:44 compute-0 nova_compute[254898]: 2025-11-29 05:46:44.344 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:46:44 compute-0 nova_compute[254898]: 2025-11-29 05:46:44.345 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:46:44 compute-0 ceph-mon[75176]: pgmap v1301: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:44 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2606435426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:46:45 compute-0 sshd-session[282837]: Invalid user cc from 152.32.145.111 port 35286
Nov 29 05:46:45 compute-0 sshd-session[282837]: Received disconnect from 152.32.145.111 port 35286:11: Bye Bye [preauth]
Nov 29 05:46:45 compute-0 sshd-session[282837]: Disconnected from invalid user cc 152.32.145.111 port 35286 [preauth]
Nov 29 05:46:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:46 compute-0 ceph-mon[75176]: pgmap v1302: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:49 compute-0 ceph-mon[75176]: pgmap v1303: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:50 compute-0 nova_compute[254898]: 2025-11-29 05:46:50.340 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:46:50 compute-0 nova_compute[254898]: 2025-11-29 05:46:50.341 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:46:50 compute-0 nova_compute[254898]: 2025-11-29 05:46:50.341 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:46:50 compute-0 nova_compute[254898]: 2025-11-29 05:46:50.341 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:46:50 compute-0 ceph-mon[75176]: pgmap v1304: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:51 compute-0 podman[282839]: 2025-11-29 05:46:51.026092236 +0000 UTC m=+0.072202228 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:46:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:46:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:52 compute-0 nova_compute[254898]: 2025-11-29 05:46:52.426 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:46:52 compute-0 ceph-mon[75176]: pgmap v1305: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:54 compute-0 ceph-mon[75176]: pgmap v1306: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:56 compute-0 ceph-mon[75176]: pgmap v1307: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:46:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:46:59 compute-0 ceph-mon[75176]: pgmap v1308: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:01 compute-0 anacron[34133]: Job `cron.weekly' started
Nov 29 05:47:01 compute-0 anacron[34133]: Job `cron.weekly' terminated
Nov 29 05:47:01 compute-0 sudo[282859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:47:01 compute-0 sudo[282859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:01 compute-0 sudo[282859]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:01 compute-0 sudo[282886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:47:01 compute-0 sudo[282886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:01 compute-0 sudo[282886]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:01 compute-0 sudo[282911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:47:01 compute-0 sudo[282911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:01 compute-0 sudo[282911]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:01 compute-0 ceph-mon[75176]: pgmap v1309: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:01 compute-0 sudo[282936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:47:01 compute-0 sudo[282936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:01 compute-0 sudo[282936]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:47:01 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:47:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:47:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:47:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:47:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:47:01 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 94938caa-906b-4e1e-926e-7bfe26b9392c does not exist
Nov 29 05:47:01 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 0cc35450-1361-4d0c-9efb-2ef5df8de886 does not exist
Nov 29 05:47:01 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev fc1eb606-2a80-427c-b923-0e6796199957 does not exist
Nov 29 05:47:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:47:01 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:47:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:47:01 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:47:01 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:47:01 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:47:01 compute-0 sudo[282993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:47:01 compute-0 sudo[282993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:01 compute-0 sudo[282993]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:01 compute-0 sudo[283018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:47:01 compute-0 sudo[283018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:01 compute-0 sudo[283018]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:01 compute-0 sudo[283043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:47:01 compute-0 sudo[283043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:01 compute-0 sudo[283043]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:02 compute-0 sudo[283068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:47:02 compute-0 sudo[283068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:47:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:47:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:47:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:47:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:47:02 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:47:02 compute-0 podman[283133]: 2025-11-29 05:47:02.376253748 +0000 UTC m=+0.062666862 container create 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 05:47:02 compute-0 systemd[1]: Started libpod-conmon-2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2.scope.
Nov 29 05:47:02 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:47:02 compute-0 podman[283133]: 2025-11-29 05:47:02.340355534 +0000 UTC m=+0.026768658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:47:02 compute-0 podman[283133]: 2025-11-29 05:47:02.551015595 +0000 UTC m=+0.237428709 container init 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:47:02 compute-0 podman[283133]: 2025-11-29 05:47:02.55840664 +0000 UTC m=+0.244819754 container start 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:47:02 compute-0 fervent_turing[283149]: 167 167
Nov 29 05:47:02 compute-0 systemd[1]: libpod-2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2.scope: Deactivated successfully.
Nov 29 05:47:02 compute-0 podman[283133]: 2025-11-29 05:47:02.572797452 +0000 UTC m=+0.259210566 container attach 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:47:02 compute-0 podman[283133]: 2025-11-29 05:47:02.573370707 +0000 UTC m=+0.259783811 container died 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 05:47:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-696ce470ec34c27bc473c63fcb478980495bd1bd7c8f6a8a9da86893f30e877c-merged.mount: Deactivated successfully.
Nov 29 05:47:02 compute-0 podman[283133]: 2025-11-29 05:47:02.663093611 +0000 UTC m=+0.349506725 container remove 2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_turing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 05:47:02 compute-0 systemd[1]: libpod-conmon-2d5d19ac68123698c78e189e265ae7c3c2cd290d5b962f17a6edd7c3371c64b2.scope: Deactivated successfully.
Nov 29 05:47:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:02 compute-0 podman[283177]: 2025-11-29 05:47:02.793088023 +0000 UTC m=+0.021153985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:47:02 compute-0 podman[283177]: 2025-11-29 05:47:02.924030947 +0000 UTC m=+0.152096929 container create 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:47:03 compute-0 systemd[1]: Started libpod-conmon-647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb.scope.
Nov 29 05:47:03 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:03 compute-0 podman[283177]: 2025-11-29 05:47:03.069826175 +0000 UTC m=+0.297892197 container init 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:47:03 compute-0 podman[283177]: 2025-11-29 05:47:03.078122932 +0000 UTC m=+0.306188874 container start 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:47:03 compute-0 podman[283177]: 2025-11-29 05:47:03.094479231 +0000 UTC m=+0.322545213 container attach 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 05:47:03 compute-0 ceph-mon[75176]: pgmap v1310: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:04 compute-0 adoring_euclid[283194]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:47:04 compute-0 adoring_euclid[283194]: --> relative data size: 1.0
Nov 29 05:47:04 compute-0 adoring_euclid[283194]: --> All data devices are unavailable
Nov 29 05:47:04 compute-0 systemd[1]: libpod-647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb.scope: Deactivated successfully.
Nov 29 05:47:04 compute-0 podman[283177]: 2025-11-29 05:47:04.241101085 +0000 UTC m=+1.469167067 container died 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:47:04 compute-0 systemd[1]: libpod-647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb.scope: Consumed 1.097s CPU time.
Nov 29 05:47:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:04 compute-0 ceph-mon[75176]: pgmap v1311: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec4c3af37fa440764740b416afb5ff747bd0131d380f0ad62345be476c3f2bd9-merged.mount: Deactivated successfully.
Nov 29 05:47:04 compute-0 podman[283177]: 2025-11-29 05:47:04.548398753 +0000 UTC m=+1.776464685 container remove 647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 05:47:04 compute-0 systemd[1]: libpod-conmon-647677a259b636e184769744ff72379889e743462b4421b8688a52b5f3e6b4bb.scope: Deactivated successfully.
Nov 29 05:47:04 compute-0 sudo[283068]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:04 compute-0 sudo[283238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:47:04 compute-0 sudo[283238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:04 compute-0 sudo[283238]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:04 compute-0 sudo[283263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:47:04 compute-0 sudo[283263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:04 compute-0 sudo[283263]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:04 compute-0 sudo[283288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:47:04 compute-0 sudo[283288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:04 compute-0 sudo[283288]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:04 compute-0 sudo[283313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:47:04 compute-0 sudo[283313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:05 compute-0 podman[283380]: 2025-11-29 05:47:05.100756712 +0000 UTC m=+0.023410488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:47:05 compute-0 podman[283380]: 2025-11-29 05:47:05.27516128 +0000 UTC m=+0.197815036 container create e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:47:05 compute-0 systemd[1]: Started libpod-conmon-e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2.scope.
Nov 29 05:47:05 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:47:05 compute-0 podman[283380]: 2025-11-29 05:47:05.437188664 +0000 UTC m=+0.359842430 container init e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:47:05 compute-0 podman[283380]: 2025-11-29 05:47:05.444323064 +0000 UTC m=+0.366976820 container start e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:47:05 compute-0 inspiring_moser[283396]: 167 167
Nov 29 05:47:05 compute-0 systemd[1]: libpod-e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2.scope: Deactivated successfully.
Nov 29 05:47:05 compute-0 podman[283380]: 2025-11-29 05:47:05.572829541 +0000 UTC m=+0.495483297 container attach e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 05:47:05 compute-0 podman[283380]: 2025-11-29 05:47:05.573420755 +0000 UTC m=+0.496074511 container died e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 05:47:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-18675f06305cb69f3e5a3793039716cc6c5b07bf585b97f2f450980426f9cb3b-merged.mount: Deactivated successfully.
Nov 29 05:47:05 compute-0 podman[283380]: 2025-11-29 05:47:05.7401615 +0000 UTC m=+0.662815296 container remove e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_moser, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:47:05 compute-0 systemd[1]: libpod-conmon-e61f27bba2abee37438fd3d964e31514fed8229f425baa837eefc3d224251af2.scope: Deactivated successfully.
Nov 29 05:47:05 compute-0 podman[283420]: 2025-11-29 05:47:05.877040456 +0000 UTC m=+0.027673749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:47:06 compute-0 podman[283420]: 2025-11-29 05:47:06.060742336 +0000 UTC m=+0.211375599 container create 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:47:06 compute-0 systemd[1]: Started libpod-conmon-91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c.scope.
Nov 29 05:47:06 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66dbc95ecca03b839dcff5ff60c7bb46a2423802a273d9ab531181c693ba3a42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66dbc95ecca03b839dcff5ff60c7bb46a2423802a273d9ab531181c693ba3a42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66dbc95ecca03b839dcff5ff60c7bb46a2423802a273d9ab531181c693ba3a42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66dbc95ecca03b839dcff5ff60c7bb46a2423802a273d9ab531181c693ba3a42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:06 compute-0 podman[283420]: 2025-11-29 05:47:06.436847562 +0000 UTC m=+0.587480855 container init 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:47:06 compute-0 podman[283420]: 2025-11-29 05:47:06.445688912 +0000 UTC m=+0.596322175 container start 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 05:47:06 compute-0 podman[283420]: 2025-11-29 05:47:06.463010275 +0000 UTC m=+0.613643548 container attach 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:47:06 compute-0 ceph-mon[75176]: pgmap v1312: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:07 compute-0 podman[283443]: 2025-11-29 05:47:07.015310281 +0000 UTC m=+0.068891269 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 05:47:07 compute-0 magical_tharp[283438]: {
Nov 29 05:47:07 compute-0 magical_tharp[283438]:     "0": [
Nov 29 05:47:07 compute-0 magical_tharp[283438]:         {
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "devices": [
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "/dev/loop3"
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             ],
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_name": "ceph_lv0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_size": "21470642176",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "name": "ceph_lv0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "tags": {
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.cluster_name": "ceph",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.crush_device_class": "",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.encrypted": "0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.osd_id": "0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.type": "block",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.vdo": "0"
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             },
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "type": "block",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "vg_name": "ceph_vg0"
Nov 29 05:47:07 compute-0 magical_tharp[283438]:         }
Nov 29 05:47:07 compute-0 magical_tharp[283438]:     ],
Nov 29 05:47:07 compute-0 magical_tharp[283438]:     "1": [
Nov 29 05:47:07 compute-0 magical_tharp[283438]:         {
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "devices": [
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "/dev/loop4"
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             ],
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_name": "ceph_lv1",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_size": "21470642176",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "name": "ceph_lv1",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "tags": {
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.cluster_name": "ceph",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.crush_device_class": "",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.encrypted": "0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.osd_id": "1",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.type": "block",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.vdo": "0"
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             },
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "type": "block",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "vg_name": "ceph_vg1"
Nov 29 05:47:07 compute-0 magical_tharp[283438]:         }
Nov 29 05:47:07 compute-0 magical_tharp[283438]:     ],
Nov 29 05:47:07 compute-0 magical_tharp[283438]:     "2": [
Nov 29 05:47:07 compute-0 magical_tharp[283438]:         {
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "devices": [
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "/dev/loop5"
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             ],
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_name": "ceph_lv2",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_size": "21470642176",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "name": "ceph_lv2",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "tags": {
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.cluster_name": "ceph",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.crush_device_class": "",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.encrypted": "0",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.osd_id": "2",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.type": "block",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:                 "ceph.vdo": "0"
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             },
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "type": "block",
Nov 29 05:47:07 compute-0 magical_tharp[283438]:             "vg_name": "ceph_vg2"
Nov 29 05:47:07 compute-0 magical_tharp[283438]:         }
Nov 29 05:47:07 compute-0 magical_tharp[283438]:     ]
Nov 29 05:47:07 compute-0 magical_tharp[283438]: }
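The JSON that magical_tharp printed above is keyed by OSD id and carries the LVM tags cephadm stamps on each logical volume; the shape matches `ceph-volume lvm list --format json` (the command line itself is not shown for this container, so treat that as an inference). A minimal sketch that flattens such a capture into one line per OSD, reading the saved JSON from stdin:

    #!/usr/bin/env python3
    # Minimal sketch: flatten a ceph-volume lvm-list style JSON capture
    # (like the block above) into an OSD-to-device table.
    import json
    import sys

    listing = json.load(sys.stdin)  # {"0": [ {...lv...} ], "1": [...], ...}
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')} "
                  f"encrypted={tags.get('ceph.encrypted', '?')}")

Run against the capture above, this reports osd.0, osd.1 and osd.2 backed by /dev/loop3, /dev/loop4 and /dev/loop5 respectively, all unencrypted and in the same cluster fsid.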
Nov 29 05:47:07 compute-0 systemd[1]: libpod-91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c.scope: Deactivated successfully.
Nov 29 05:47:07 compute-0 podman[283420]: 2025-11-29 05:47:07.294242576 +0000 UTC m=+1.444875839 container died 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:47:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-66dbc95ecca03b839dcff5ff60c7bb46a2423802a273d9ab531181c693ba3a42-merged.mount: Deactivated successfully.
Nov 29 05:47:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:07 compute-0 podman[283420]: 2025-11-29 05:47:07.783733979 +0000 UTC m=+1.934367242 container remove 91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:47:07 compute-0 systemd[1]: libpod-conmon-91e770eab0f865dec8b3620c3c01d7436d741210f5a2fd8af78b19573dbbc14c.scope: Deactivated successfully.
Nov 29 05:47:07 compute-0 sudo[283313]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:07 compute-0 sudo[283482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:47:07 compute-0 sudo[283482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:07 compute-0 sudo[283482]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:08 compute-0 sudo[283507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:47:08 compute-0 sudo[283507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:08 compute-0 sudo[283507]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:08 compute-0 sudo[283532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:47:08 compute-0 sudo[283532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:08 compute-0 sudo[283532]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:08 compute-0 sudo[283557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:47:08 compute-0 sudo[283557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
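The sudo COMMAND at 05:47:08 shows the full wrapper cephadm uses for these scans: the per-cluster copy of the cephadm script under /var/lib/ceph/<fsid>/, the container image pinned by digest, a --timeout, and the ceph-volume subcommand after the `--` separator. A sketch of the same invocation from Python, with the fsid, digest and paths copied from that line (adjust for another host):

    #!/usr/bin/env python3
    # Minimal sketch of the invocation pattern in the sudo COMMAND above:
    # cephadm runs ceph-volume inside the pinned container image and prints
    # the JSON result to stdout (as captured by the *_fermat/_tharp lines).
    import json
    import subprocess

    FSID = "93f82912-647c-5e78-b081-707d0a2966d8"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(out), indent=4))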
Nov 29 05:47:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:08 compute-0 ceph-mon[75176]: pgmap v1313: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:08 compute-0 podman[283623]: 2025-11-29 05:47:08.592520057 +0000 UTC m=+0.108473842 container create f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:47:08 compute-0 podman[283623]: 2025-11-29 05:47:08.507253039 +0000 UTC m=+0.023206844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:47:08 compute-0 systemd[1]: Started libpod-conmon-f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230.scope.
Nov 29 05:47:08 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:47:08 compute-0 podman[283623]: 2025-11-29 05:47:08.701630872 +0000 UTC m=+0.217584667 container init f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 05:47:08 compute-0 podman[283623]: 2025-11-29 05:47:08.709708164 +0000 UTC m=+0.225661949 container start f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:47:08 compute-0 podman[283623]: 2025-11-29 05:47:08.713426432 +0000 UTC m=+0.229380217 container attach f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:47:08 compute-0 elastic_lalande[283640]: 167 167
Nov 29 05:47:08 compute-0 systemd[1]: libpod-f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230.scope: Deactivated successfully.
Nov 29 05:47:08 compute-0 podman[283645]: 2025-11-29 05:47:08.75243643 +0000 UTC m=+0.022635189 container died f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:47:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ab848df72037c439cac36798045516ae79cbace7408e6d72832432827cc4be1-merged.mount: Deactivated successfully.
Nov 29 05:47:08 compute-0 podman[283645]: 2025-11-29 05:47:08.796050228 +0000 UTC m=+0.066248977 container remove f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:47:08 compute-0 systemd[1]: libpod-conmon-f6d03da93be9e2004c45ea4e378df8d0707d25b7eaed69b80c3d8676d051d230.scope: Deactivated successfully.
Nov 29 05:47:09 compute-0 podman[283667]: 2025-11-29 05:47:08.953652807 +0000 UTC m=+0.026268586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:47:09 compute-0 podman[283667]: 2025-11-29 05:47:09.722686858 +0000 UTC m=+0.795302607 container create 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:47:09 compute-0 systemd[1]: Started libpod-conmon-03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a.scope.
Nov 29 05:47:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684ce6d3fdd2be367408a0a0b58c22ab651594902dcfe9e332ecdb11dfc4778c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684ce6d3fdd2be367408a0a0b58c22ab651594902dcfe9e332ecdb11dfc4778c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684ce6d3fdd2be367408a0a0b58c22ab651594902dcfe9e332ecdb11dfc4778c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/684ce6d3fdd2be367408a0a0b58c22ab651594902dcfe9e332ecdb11dfc4778c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:47:09 compute-0 podman[283667]: 2025-11-29 05:47:09.824033959 +0000 UTC m=+0.896649708 container init 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:47:09 compute-0 podman[283667]: 2025-11-29 05:47:09.833026863 +0000 UTC m=+0.905642612 container start 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:47:09 compute-0 podman[283667]: 2025-11-29 05:47:09.837717614 +0000 UTC m=+0.910333383 container attach 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 05:47:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:10 compute-0 clever_fermat[283684]: {
Nov 29 05:47:10 compute-0 clever_fermat[283684]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "osd_id": 0,
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "type": "bluestore"
Nov 29 05:47:10 compute-0 clever_fermat[283684]:     },
Nov 29 05:47:10 compute-0 clever_fermat[283684]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "osd_id": 1,
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "type": "bluestore"
Nov 29 05:47:10 compute-0 clever_fermat[283684]:     },
Nov 29 05:47:10 compute-0 clever_fermat[283684]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "osd_id": 2,
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:47:10 compute-0 clever_fermat[283684]:         "type": "bluestore"
Nov 29 05:47:10 compute-0 clever_fermat[283684]:     }
Nov 29 05:47:10 compute-0 clever_fermat[283684]: }
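clever_fermat's output above is the `raw list --format json` result requested at 05:47:08: keyed by osd_uuid, with the device-mapper path rather than the LV path. Both listings describe the same three bluestore OSDs, so a quick consistency check is possible; a sketch, assuming the two JSON captures were saved to files passed as arguments:

    #!/usr/bin/env python3
    # Minimal sketch: cross-check the two listings above. "lvm list" is keyed
    # by osd_id with LV paths; "raw list" is keyed by osd_uuid with the
    # device-mapper path. They should agree on every OSD.
    import json
    import sys

    lvm = json.load(open(sys.argv[1]))   # saved lvm-list JSON
    raw = json.load(open(sys.argv[2]))   # saved raw-list JSON

    by_fsid = {lv["tags"]["ceph.osd_fsid"]: (osd_id, lv)
               for osd_id, lvs in lvm.items() for lv in lvs}

    for osd_uuid, entry in raw.items():
        osd_id, lv = by_fsid[osd_uuid]
        assert entry["osd_id"] == int(osd_id), f"id mismatch for {osd_uuid}"
        # /dev/mapper/ceph_vg0-ceph_lv0 and /dev/ceph_vg0/ceph_lv0 name the
        # same LV; compare vg/lv components (assumes no hyphens in VG/LV names).
        vg_lv = entry["device"].rsplit("/", 1)[1].replace("-", "/")
        assert lv["lv_path"].endswith(vg_lv), f"device mismatch for {osd_uuid}"
        print(f"osd.{osd_id} ({entry['type']}): {entry['device']} OK")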
Nov 29 05:47:10 compute-0 systemd[1]: libpod-03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a.scope: Deactivated successfully.
Nov 29 05:47:10 compute-0 podman[283667]: 2025-11-29 05:47:10.762397948 +0000 UTC m=+1.835013727 container died 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-684ce6d3fdd2be367408a0a0b58c22ab651594902dcfe9e332ecdb11dfc4778c-merged.mount: Deactivated successfully.
Nov 29 05:47:11 compute-0 podman[283667]: 2025-11-29 05:47:11.245601801 +0000 UTC m=+2.318217550 container remove 03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:47:11 compute-0 sudo[283557]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:47:11 compute-0 systemd[1]: libpod-conmon-03aed23a08ed5d0c8453029dee9a1a9c4d1eb26ab2d9dc4b6908393e2cf7452a.scope: Deactivated successfully.
Nov 29 05:47:11 compute-0 podman[283729]: 2025-11-29 05:47:11.340248933 +0000 UTC m=+0.150431459 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:47:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:47:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:47:11 compute-0 ceph-mon[75176]: pgmap v1314: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:11 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
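The handle_command lines above show the mgr persisting the scan results into the mon's config-key store, under mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0. The stored value can be read back with `ceph config-key get`; a sketch, assuming a keyring permitted to run it and that the stored value is JSON (true for current cephadm device caches, but worth verifying):

    #!/usr/bin/env python3
    # Minimal sketch: read back the inventory cephadm just stored via the
    # "config-key set" commands above.
    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(
        ["ceph", "config-key", "get", key],
        check=True, capture_output=True, text=True,
    ).stdout
    # The device cache can be large; print only the first chunk.
    print(json.dumps(json.loads(out), indent=2)[:2000])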
Nov 29 05:47:11 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 1726d3a1-7af1-46d0-bf32-c98c8262fda3 does not exist
Nov 29 05:47:11 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 43d31791-ec59-41df-bfb2-4ae21d53b348 does not exist
Nov 29 05:47:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:47:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:47:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:47:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:47:11 compute-0 sudo[283753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:47:11 compute-0 sudo[283753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:11 compute-0 sudo[283753]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:11 compute-0 sudo[283778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:47:11 compute-0 sudo[283778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:47:11 compute-0 sudo[283778]: pam_unix(sudo:session): session closed for user root
Nov 29 05:47:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:47:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:47:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:12 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:47:12 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:47:12 compute-0 ceph-mon[75176]: pgmap v1315: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:47:13.761 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:47:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:47:13.762 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:47:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:47:13.762 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
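The acquire/release trio above ("Acquiring lock", "acquired ... waited", "released ... held", all from inner() in lockutils.py) is the DEBUG signature of oslo.concurrency's synchronized decorator wrapping ProcessMonitor._check_child_processes. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the function body is a placeholder:

    #!/usr/bin/env python3
    # Minimal sketch: the lock pattern behind the three lockutils lines
    # above. With DEBUG logging enabled, calling the decorated function
    # emits the same Acquiring/acquired/released trio.
    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Body runs with the in-process lock held, like
        # ProcessMonitor._check_child_processes in the agent above.
        pass

    check_child_processes()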
Nov 29 05:47:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:47:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/959059243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:47:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:47:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/959059243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:47:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/959059243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:47:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/959059243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:47:15 compute-0 ceph-mon[75176]: pgmap v1316: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:16 compute-0 ceph-mon[75176]: pgmap v1317: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:18 compute-0 sshd-session[283803]: Invalid user admin1 from 45.120.216.232 port 54832
Nov 29 05:47:18 compute-0 sshd-session[283803]: Received disconnect from 45.120.216.232 port 54832:11: Bye Bye [preauth]
Nov 29 05:47:18 compute-0 sshd-session[283803]: Disconnected from invalid user admin1 45.120.216.232 port 54832 [preauth]
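The Invalid user lines here (and again at 05:47:26 below) are routine SSH brute-force probes reaching the node, unrelated to the deployment activity. A sketch that tallies such probes per source address and username, fed this log or `journalctl -u sshd` on stdin:

    #!/usr/bin/env python3
    # Minimal sketch: tally SSH probe attempts like the "Invalid user"
    # lines above.
    import re
    import sys
    from collections import Counter

    pattern = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = Counter()
    for line in sys.stdin:
        m = pattern.search(line)
        if m:
            hits[(m.group(2), m.group(1))] += 1  # (source IP, username)

    for (ip, user), n in hits.most_common():
        print(f"{n:4d}  {ip:<16} {user}")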
Nov 29 05:47:19 compute-0 ceph-mon[75176]: pgmap v1318: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:20 compute-0 ceph-mon[75176]: pgmap v1319: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:21 compute-0 podman[283805]: 2025-11-29 05:47:21.999054369 +0000 UTC m=+0.049571720 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 05:47:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:22 compute-0 ceph-mon[75176]: pgmap v1320: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:25 compute-0 ceph-mon[75176]: pgmap v1321: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:26 compute-0 sshd-session[283825]: Invalid user asterisk from 45.78.219.216 port 53000
Nov 29 05:47:26 compute-0 sshd-session[283825]: Received disconnect from 45.78.219.216 port 53000:11: Bye Bye [preauth]
Nov 29 05:47:26 compute-0 sshd-session[283825]: Disconnected from invalid user asterisk 45.78.219.216 port 53000 [preauth]
Nov 29 05:47:27 compute-0 ceph-mon[75176]: pgmap v1322: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:28 compute-0 ceph-mon[75176]: pgmap v1323: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:31 compute-0 ceph-mon[75176]: pgmap v1324: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:32 compute-0 ceph-mon[75176]: pgmap v1325: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:34 compute-0 ceph-mon[75176]: pgmap v1326: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:38 compute-0 podman[283828]: 2025-11-29 05:47:38.044747577 +0000 UTC m=+0.082531304 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 29 05:47:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:40 compute-0 sshd-session[283827]: error: kex_exchange_identification: read: Connection timed out
Nov 29 05:47:40 compute-0 sshd-session[283827]: banner exchange: Connection from 14.103.242.177 port 48380: Connection timed out
Nov 29 05:47:40 compute-0 nova_compute[254898]: 2025-11-29 05:47:40.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:47:41
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['images', 'default.rgw.log', '.mgr', 'vms', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:47:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:47:41 compute-0 nova_compute[254898]: 2025-11-29 05:47:41.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:47:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:47:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:47:42 compute-0 podman[283850]: 2025-11-29 05:47:42.057752758 +0000 UTC m=+0.100739117 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 05:47:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:43 compute-0 nova_compute[254898]: 2025-11-29 05:47:43.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:47:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:44 compute-0 ceph-mon[75176]: pgmap v1327: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:44 compute-0 nova_compute[254898]: 2025-11-29 05:47:44.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:47:44 compute-0 nova_compute[254898]: 2025-11-29 05:47:44.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:47:44 compute-0 nova_compute[254898]: 2025-11-29 05:47:44.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:47:44 compute-0 nova_compute[254898]: 2025-11-29 05:47:44.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:47:44 compute-0 nova_compute[254898]: 2025-11-29 05:47:44.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.021 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.022 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.022 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.022 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.022 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:47:45 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:47:45 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1422408111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.489 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
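The resource-tracker audit shells out to the same `ceph df` the mon dispatches for client.openstack (the 192.168.122.100 entries around this point), and the 0.467 s round trip is visible on both sides. A sketch of the same call and the aggregate fields it exposes; the key names ("stats", "total_avail_bytes") reflect current ceph df JSON output and should be treated as an assumption to verify:

    #!/usr/bin/env python3
    # Minimal sketch: run the command the resource tracker just ran and
    # derive a free-space figure comparable to the pgmap "avail" numbers.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    free_gb = stats["total_avail_bytes"] / 1024**3
    print(f"cluster free: {free_gb:.2f} GiB of "
          f"{stats['total_bytes'] / 1024**3:.2f} GiB")

On this cluster the result should line up with the recurring "60 GiB / 60 GiB avail" pgmap entries and the free_disk=59.98828125GB figure in the hypervisor resource view below.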
Nov 29 05:47:45 compute-0 ceph-mon[75176]: pgmap v1328: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:45 compute-0 ceph-mon[75176]: pgmap v1329: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:45 compute-0 ceph-mon[75176]: pgmap v1330: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:45 compute-0 ceph-mon[75176]: pgmap v1331: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:45 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1422408111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.629 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.630 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.631 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.631 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.860 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.860 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:47:45 compute-0 nova_compute[254898]: 2025-11-29 05:47:45.879 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:47:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:47:46 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/936850596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:47:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:46 compute-0 nova_compute[254898]: 2025-11-29 05:47:46.273 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:47:46 compute-0 nova_compute[254898]: 2025-11-29 05:47:46.278 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:47:46 compute-0 nova_compute[254898]: 2025-11-29 05:47:46.378 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:47:46 compute-0 nova_compute[254898]: 2025-11-29 05:47:46.382 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:47:46 compute-0 nova_compute[254898]: 2025-11-29 05:47:46.383 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:47:46 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/936850596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:47:46 compute-0 ceph-mon[75176]: pgmap v1332: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:48 compute-0 ceph-mon[75176]: pgmap v1333: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:50 compute-0 nova_compute[254898]: 2025-11-29 05:47:50.380 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:47:50 compute-0 nova_compute[254898]: 2025-11-29 05:47:50.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:47:50 compute-0 nova_compute[254898]: 2025-11-29 05:47:50.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:47:50 compute-0 nova_compute[254898]: 2025-11-29 05:47:50.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:47:51 compute-0 nova_compute[254898]: 2025-11-29 05:47:51.177 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:47:51 compute-0 ceph-mon[75176]: pgmap v1334: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:47:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:47:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:52 compute-0 ceph-mon[75176]: pgmap v1335: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:52 compute-0 podman[283921]: 2025-11-29 05:47:52.998482522 +0000 UTC m=+0.050037312 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 05:47:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:47:54 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6526 writes, 30K keys, 6526 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6526 writes, 6526 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1715 writes, 8415 keys, 1715 commit groups, 1.0 writes per commit group, ingest: 10.60 MB, 0.02 MB/s
                                           Interval WAL: 1715 writes, 1715 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    110.5      0.30              0.13        16    0.019       0      0       0.0       0.0
                                             L6      1/0    8.41 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    151.1    123.9      0.92              0.42        15    0.061     72K   8391       0.0       0.0
                                            Sum      1/0    8.41 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4    113.7    120.6      1.22              0.56        31    0.039     72K   8391       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1    125.0    127.9      0.35              0.16         8    0.044     24K   2605       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    151.1    123.9      0.92              0.42        15    0.061     72K   8391       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    111.1      0.30              0.13        15    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.2      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.033, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.14 GB read, 0.06 MB/s read, 1.2 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556a62a271f0#2 capacity: 304.00 MB usage: 16.09 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000234 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1249,15.50 MB,5.09985%) FilterBlock(32,213.36 KB,0.0685391%) IndexBlock(32,386.75 KB,0.124239%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 29 05:47:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:55 compute-0 ceph-mon[75176]: pgmap v1336: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:56 compute-0 ceph-mon[75176]: pgmap v1337: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:47:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:47:58 compute-0 ceph-mon[75176]: pgmap v1338: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:00 compute-0 ceph-mon[75176]: pgmap v1339: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:02 compute-0 ceph-mon[75176]: pgmap v1340: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:04 compute-0 ceph-mon[75176]: pgmap v1341: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:06 compute-0 ceph-mon[75176]: pgmap v1342: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:08 compute-0 ceph-mon[75176]: pgmap v1343: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:09 compute-0 podman[283941]: 2025-11-29 05:48:09.079457899 +0000 UTC m=+0.132910872 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 05:48:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:10 compute-0 ceph-mon[75176]: pgmap v1344: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:48:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:48:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:48:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:48:11 compute-0 sudo[283961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:48:11 compute-0 sudo[283961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:11 compute-0 sudo[283961]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:11 compute-0 sudo[283986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:48:11 compute-0 sudo[283986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:11 compute-0 sudo[283986]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:11 compute-0 sudo[284011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:48:11 compute-0 sudo[284011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:11 compute-0 sudo[284011]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:11 compute-0 sudo[284036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:48:11 compute-0 sudo[284036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:48:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:48:12 compute-0 sudo[284036]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:48:12 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:48:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:48:12 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:48:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:48:12 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:48:12 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev cbec915b-f65a-46c2-b015-dff53fe8606b does not exist
Nov 29 05:48:12 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a7c94ba6-f09e-4eca-b63a-1832bddd4398 does not exist
Nov 29 05:48:12 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev b7218b4b-bba6-41f2-a5d7-ef694b233edc does not exist
Nov 29 05:48:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:48:12 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:48:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:48:12 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:48:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:48:12 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:48:12 compute-0 ceph-mon[75176]: pgmap v1345: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:12 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:48:12 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:48:12 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:48:12 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:48:12 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:48:12 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:48:12 compute-0 sudo[284092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:48:12 compute-0 sudo[284092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:12 compute-0 sudo[284092]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:12 compute-0 sudo[284123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:48:12 compute-0 sudo[284123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:12 compute-0 sudo[284123]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:12 compute-0 podman[284116]: 2025-11-29 05:48:12.515097418 +0000 UTC m=+0.085894824 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 05:48:12 compute-0 sudo[284166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:48:12 compute-0 sudo[284166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:12 compute-0 sudo[284166]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:12 compute-0 sudo[284193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:48:12 compute-0 sudo[284193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:13 compute-0 podman[284258]: 2025-11-29 05:48:13.000375431 +0000 UTC m=+0.058571405 container create 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:48:13 compute-0 systemd[1]: Started libpod-conmon-15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657.scope.
Nov 29 05:48:13 compute-0 podman[284258]: 2025-11-29 05:48:12.966896814 +0000 UTC m=+0.025092838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:48:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:48:13 compute-0 podman[284258]: 2025-11-29 05:48:13.113121132 +0000 UTC m=+0.171317176 container init 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 29 05:48:13 compute-0 podman[284258]: 2025-11-29 05:48:13.12144057 +0000 UTC m=+0.179636504 container start 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:48:13 compute-0 podman[284258]: 2025-11-29 05:48:13.124910723 +0000 UTC m=+0.183106767 container attach 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 05:48:13 compute-0 boring_mestorf[284274]: 167 167
Nov 29 05:48:13 compute-0 systemd[1]: libpod-15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657.scope: Deactivated successfully.
Nov 29 05:48:13 compute-0 podman[284258]: 2025-11-29 05:48:13.130119266 +0000 UTC m=+0.188315220 container died 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 05:48:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-79e95849fc2762a83b4688c5f478fcc9c880ce052bbb62c47fd184d5346a78a6-merged.mount: Deactivated successfully.
Nov 29 05:48:13 compute-0 podman[284258]: 2025-11-29 05:48:13.181000707 +0000 UTC m=+0.239196661 container remove 15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 05:48:13 compute-0 systemd[1]: libpod-conmon-15a8950624654a39cd5fb0395157d046033aabca094d42f11400fcc2247f5657.scope: Deactivated successfully.
Nov 29 05:48:13 compute-0 podman[284299]: 2025-11-29 05:48:13.409217595 +0000 UTC m=+0.042070912 container create ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:48:13 compute-0 systemd[1]: Started libpod-conmon-ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d.scope.
Nov 29 05:48:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:13 compute-0 podman[284299]: 2025-11-29 05:48:13.393609953 +0000 UTC m=+0.026463290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:13 compute-0 podman[284299]: 2025-11-29 05:48:13.501663474 +0000 UTC m=+0.134516821 container init ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:48:13 compute-0 podman[284299]: 2025-11-29 05:48:13.510060873 +0000 UTC m=+0.142914190 container start ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 05:48:13 compute-0 podman[284299]: 2025-11-29 05:48:13.513523405 +0000 UTC m=+0.146376722 container attach ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:48:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:48:13.762 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:48:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:48:13.763 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:48:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:48:13.763 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:48:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:48:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3951864936' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:48:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:48:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3951864936' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:48:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3951864936' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:48:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3951864936' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:48:14 compute-0 goofy_sutherland[284315]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:48:14 compute-0 goofy_sutherland[284315]: --> relative data size: 1.0
Nov 29 05:48:14 compute-0 goofy_sutherland[284315]: --> All data devices are unavailable
Nov 29 05:48:14 compute-0 systemd[1]: libpod-ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d.scope: Deactivated successfully.
Nov 29 05:48:14 compute-0 podman[284299]: 2025-11-29 05:48:14.501671139 +0000 UTC m=+1.134524456 container died ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:48:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-61072e10a5b25684ff19e4459b0c317df6cb58634e9a5fa021ca71fd439afb2c-merged.mount: Deactivated successfully.
Nov 29 05:48:14 compute-0 podman[284299]: 2025-11-29 05:48:14.554796643 +0000 UTC m=+1.187649960 container remove ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 05:48:14 compute-0 systemd[1]: libpod-conmon-ccd467a618135495f49ec60a5a8816fd11c18f27ba1996666eb117a6e29f566d.scope: Deactivated successfully.
Nov 29 05:48:14 compute-0 sudo[284193]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:14 compute-0 sudo[284355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:48:14 compute-0 sudo[284355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:14 compute-0 sudo[284355]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:14 compute-0 sudo[284380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:48:14 compute-0 sudo[284380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:14 compute-0 sudo[284380]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:14 compute-0 sudo[284405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:48:14 compute-0 sudo[284405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:14 compute-0 sudo[284405]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:14 compute-0 sudo[284430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:48:14 compute-0 sudo[284430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:15 compute-0 podman[284495]: 2025-11-29 05:48:15.174464312 +0000 UTC m=+0.039115512 container create b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:48:15 compute-0 systemd[1]: Started libpod-conmon-b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63.scope.
Nov 29 05:48:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:48:15 compute-0 podman[284495]: 2025-11-29 05:48:15.233422734 +0000 UTC m=+0.098073934 container init b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:48:15 compute-0 podman[284495]: 2025-11-29 05:48:15.239756985 +0000 UTC m=+0.104408195 container start b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:48:15 compute-0 podman[284495]: 2025-11-29 05:48:15.242508211 +0000 UTC m=+0.107159441 container attach b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:48:15 compute-0 naughty_hoover[284511]: 167 167
Nov 29 05:48:15 compute-0 systemd[1]: libpod-b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63.scope: Deactivated successfully.
Nov 29 05:48:15 compute-0 podman[284495]: 2025-11-29 05:48:15.158785339 +0000 UTC m=+0.023436559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:48:15 compute-0 podman[284516]: 2025-11-29 05:48:15.282177104 +0000 UTC m=+0.023012538 container died b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:48:15 compute-0 ceph-mon[75176]: pgmap v1346: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f924cb01da699a19b222a64a916d2d68450310dc6dd1580b339088eee29a5da8-merged.mount: Deactivated successfully.
Nov 29 05:48:15 compute-0 podman[284516]: 2025-11-29 05:48:15.314340049 +0000 UTC m=+0.055175483 container remove b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:48:15 compute-0 systemd[1]: libpod-conmon-b337051046daab0861fc55663fadcb332f524cbc8c40946f44dd689ead6f5f63.scope: Deactivated successfully.
Nov 29 05:48:15 compute-0 podman[284538]: 2025-11-29 05:48:15.471074647 +0000 UTC m=+0.043557267 container create fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 05:48:15 compute-0 systemd[1]: Started libpod-conmon-fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35.scope.
Nov 29 05:48:15 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd23a92e299aebb20d63a8487916be79f939f18ac2306366e77f826cc49772fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd23a92e299aebb20d63a8487916be79f939f18ac2306366e77f826cc49772fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd23a92e299aebb20d63a8487916be79f939f18ac2306366e77f826cc49772fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd23a92e299aebb20d63a8487916be79f939f18ac2306366e77f826cc49772fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:15 compute-0 podman[284538]: 2025-11-29 05:48:15.450843606 +0000 UTC m=+0.023326256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:48:15 compute-0 podman[284538]: 2025-11-29 05:48:15.55189435 +0000 UTC m=+0.124377000 container init fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:48:15 compute-0 podman[284538]: 2025-11-29 05:48:15.557163305 +0000 UTC m=+0.129645925 container start fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:48:15 compute-0 podman[284538]: 2025-11-29 05:48:15.560600317 +0000 UTC m=+0.133082937 container attach fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:48:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:16 compute-0 bold_brattain[284555]: {
Nov 29 05:48:16 compute-0 bold_brattain[284555]:     "0": [
Nov 29 05:48:16 compute-0 bold_brattain[284555]:         {
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "devices": [
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "/dev/loop3"
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             ],
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_name": "ceph_lv0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_size": "21470642176",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "name": "ceph_lv0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "tags": {
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.cluster_name": "ceph",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.crush_device_class": "",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.encrypted": "0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.osd_id": "0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.type": "block",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.vdo": "0"
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             },
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "type": "block",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "vg_name": "ceph_vg0"
Nov 29 05:48:16 compute-0 bold_brattain[284555]:         }
Nov 29 05:48:16 compute-0 bold_brattain[284555]:     ],
Nov 29 05:48:16 compute-0 bold_brattain[284555]:     "1": [
Nov 29 05:48:16 compute-0 bold_brattain[284555]:         {
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "devices": [
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "/dev/loop4"
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             ],
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_name": "ceph_lv1",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_size": "21470642176",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "name": "ceph_lv1",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "tags": {
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.cluster_name": "ceph",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.crush_device_class": "",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.encrypted": "0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.osd_id": "1",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.type": "block",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.vdo": "0"
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             },
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "type": "block",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "vg_name": "ceph_vg1"
Nov 29 05:48:16 compute-0 bold_brattain[284555]:         }
Nov 29 05:48:16 compute-0 bold_brattain[284555]:     ],
Nov 29 05:48:16 compute-0 bold_brattain[284555]:     "2": [
Nov 29 05:48:16 compute-0 bold_brattain[284555]:         {
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "devices": [
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "/dev/loop5"
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             ],
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_name": "ceph_lv2",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_size": "21470642176",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "name": "ceph_lv2",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "tags": {
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.cluster_name": "ceph",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.crush_device_class": "",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.encrypted": "0",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.osd_id": "2",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.type": "block",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:                 "ceph.vdo": "0"
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             },
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "type": "block",
Nov 29 05:48:16 compute-0 bold_brattain[284555]:             "vg_name": "ceph_vg2"
Nov 29 05:48:16 compute-0 bold_brattain[284555]:         }
Nov 29 05:48:16 compute-0 bold_brattain[284555]:     ]
Nov 29 05:48:16 compute-0 bold_brattain[284555]: }
Nov 29 05:48:16 compute-0 systemd[1]: libpod-fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35.scope: Deactivated successfully.
Nov 29 05:48:16 compute-0 podman[284538]: 2025-11-29 05:48:16.339384806 +0000 UTC m=+0.911867446 container died fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd23a92e299aebb20d63a8487916be79f939f18ac2306366e77f826cc49772fb-merged.mount: Deactivated successfully.
Nov 29 05:48:16 compute-0 podman[284538]: 2025-11-29 05:48:16.395832803 +0000 UTC m=+0.968315443 container remove fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 05:48:16 compute-0 systemd[1]: libpod-conmon-fb898befc68f70f6e714b42b314ca6f3fe97bb2159ff8fa1764122aba4451b35.scope: Deactivated successfully.
Nov 29 05:48:16 compute-0 sudo[284430]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:16 compute-0 sudo[284578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:48:16 compute-0 sudo[284578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:16 compute-0 sudo[284578]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:16 compute-0 sudo[284603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:48:16 compute-0 sudo[284603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:16 compute-0 sudo[284603]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:16 compute-0 sudo[284628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:48:16 compute-0 sudo[284628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:16 compute-0 sudo[284628]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:16 compute-0 sudo[284653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:48:16 compute-0 sudo[284653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:17 compute-0 podman[284719]: 2025-11-29 05:48:17.013349028 +0000 UTC m=+0.036860617 container create 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:48:17 compute-0 systemd[1]: Started libpod-conmon-73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4.scope.
Nov 29 05:48:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:48:17 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:48:17 compute-0 podman[284719]: 2025-11-29 05:48:17.081834024 +0000 UTC m=+0.105345643 container init 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 05:48:17 compute-0 podman[284719]: 2025-11-29 05:48:17.088848392 +0000 UTC m=+0.112359981 container start 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:48:17 compute-0 podman[284719]: 2025-11-29 05:48:17.092360786 +0000 UTC m=+0.115872375 container attach 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 05:48:17 compute-0 optimistic_rosalind[284735]: 167 167
Nov 29 05:48:17 compute-0 podman[284719]: 2025-11-29 05:48:16.997718583 +0000 UTC m=+0.021230182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:48:17 compute-0 systemd[1]: libpod-73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4.scope: Deactivated successfully.
Nov 29 05:48:17 compute-0 podman[284719]: 2025-11-29 05:48:17.093467523 +0000 UTC m=+0.116979122 container died 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 05:48:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f879c967adf5c21608aa88c0a76aae690e5bee9b0f72eb451030bb03a7fc4ef1-merged.mount: Deactivated successfully.
Nov 29 05:48:17 compute-0 podman[284719]: 2025-11-29 05:48:17.125641546 +0000 UTC m=+0.149153125 container remove 73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_rosalind, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:48:17 compute-0 systemd[1]: libpod-conmon-73258e5f7aaeb74fefaa4510a6646eb137204778349de76d57ab6365f4d8f8e4.scope: Deactivated successfully.
Nov 29 05:48:17 compute-0 podman[284761]: 2025-11-29 05:48:17.261478909 +0000 UTC m=+0.033715621 container create 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 05:48:17 compute-0 systemd[1]: Started libpod-conmon-2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12.scope.
Nov 29 05:48:17 compute-0 ceph-mon[75176]: pgmap v1347: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:17 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e9a26ee914780c6d172f740676b883327d31f12851f34bc7f21840d92cae9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e9a26ee914780c6d172f740676b883327d31f12851f34bc7f21840d92cae9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e9a26ee914780c6d172f740676b883327d31f12851f34bc7f21840d92cae9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e9a26ee914780c6d172f740676b883327d31f12851f34bc7f21840d92cae9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:48:17 compute-0 podman[284761]: 2025-11-29 05:48:17.246666233 +0000 UTC m=+0.018902965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:48:17 compute-0 podman[284761]: 2025-11-29 05:48:17.349136316 +0000 UTC m=+0.121373058 container init 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:48:17 compute-0 podman[284761]: 2025-11-29 05:48:17.362513186 +0000 UTC m=+0.134749898 container start 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:48:17 compute-0 podman[284761]: 2025-11-29 05:48:17.366163984 +0000 UTC m=+0.138400696 container attach 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:48:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]: {
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "osd_id": 0,
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "type": "bluestore"
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:     },
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "osd_id": 1,
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "type": "bluestore"
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:     },
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "osd_id": 2,
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:         "type": "bluestore"
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]:     }
Nov 29 05:48:18 compute-0 stoic_antonelli[284777]: }
Nov 29 05:48:18 compute-0 systemd[1]: libpod-2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12.scope: Deactivated successfully.
Nov 29 05:48:18 compute-0 podman[284761]: 2025-11-29 05:48:18.333218687 +0000 UTC m=+1.105455399 container died 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2e9a26ee914780c6d172f740676b883327d31f12851f34bc7f21840d92cae9a-merged.mount: Deactivated successfully.
Nov 29 05:48:18 compute-0 podman[284761]: 2025-11-29 05:48:18.407636755 +0000 UTC m=+1.179873507 container remove 2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_antonelli, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 05:48:18 compute-0 systemd[1]: libpod-conmon-2709cc663da0e9b17952175603823651263347c584a3777ccc6b492102566a12.scope: Deactivated successfully.
Nov 29 05:48:18 compute-0 sudo[284653]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:48:18 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:48:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:48:18 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:48:18 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 3eceb939-9325-4486-b7fa-89a5f4744b9e does not exist
Nov 29 05:48:18 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev f9384723-9722-42f8-a3e9-ea77441b9bb0 does not exist
Nov 29 05:48:18 compute-0 sudo[284821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:48:18 compute-0 sudo[284821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:18 compute-0 sudo[284821]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:18 compute-0 sudo[284846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:48:18 compute-0 sudo[284846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:48:18 compute-0 sudo[284846]: pam_unix(sudo:session): session closed for user root
Nov 29 05:48:19 compute-0 ceph-mon[75176]: pgmap v1348: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:48:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:48:20 compute-0 sshd-session[284871]: Invalid user odin from 152.32.145.111 port 44390
Nov 29 05:48:20 compute-0 sshd-session[284871]: Received disconnect from 152.32.145.111 port 44390:11: Bye Bye [preauth]
Nov 29 05:48:20 compute-0 sshd-session[284871]: Disconnected from invalid user odin 152.32.145.111 port 44390 [preauth]
Nov 29 05:48:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:21 compute-0 ceph-mon[75176]: pgmap v1349: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:23 compute-0 ceph-mon[75176]: pgmap v1350: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:24 compute-0 podman[284873]: 2025-11-29 05:48:24.018127412 +0000 UTC m=+0.065053754 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 05:48:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:24 compute-0 ceph-mon[75176]: pgmap v1351: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:26 compute-0 sshd-session[284892]: Invalid user bob from 154.221.27.234 port 53950
Nov 29 05:48:26 compute-0 ceph-mon[75176]: pgmap v1352: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:26 compute-0 sshd-session[284892]: Received disconnect from 154.221.27.234 port 53950:11: Bye Bye [preauth]
Nov 29 05:48:26 compute-0 sshd-session[284892]: Disconnected from invalid user bob 154.221.27.234 port 53950 [preauth]
Nov 29 05:48:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:28 compute-0 ceph-mon[75176]: pgmap v1353: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:30 compute-0 ceph-mon[75176]: pgmap v1354: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:32 compute-0 ceph-mon[75176]: pgmap v1355: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:33 compute-0 sshd-session[284894]: Received disconnect from 45.120.216.232 port 53726:11: Bye Bye [preauth]
Nov 29 05:48:33 compute-0 sshd-session[284894]: Disconnected from authenticating user root 45.120.216.232 port 53726 [preauth]
Nov 29 05:48:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:34 compute-0 ceph-mon[75176]: pgmap v1356: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:36 compute-0 ceph-mon[75176]: pgmap v1357: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:38 compute-0 ceph-mon[75176]: pgmap v1358: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:40 compute-0 podman[284896]: 2025-11-29 05:48:40.010362332 +0000 UTC m=+0.062796269 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 05:48:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:40 compute-0 ceph-mon[75176]: pgmap v1359: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:40 compute-0 nova_compute[254898]: 2025-11-29 05:48:40.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:48:41
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'default.rgw.control', '.mgr', '.rgw.root', 'backups']
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:48:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:48:41 compute-0 nova_compute[254898]: 2025-11-29 05:48:41.974 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:41 compute-0 nova_compute[254898]: 2025-11-29 05:48:41.986 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:48:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:48:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:42 compute-0 ceph-mon[75176]: pgmap v1360: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:43 compute-0 podman[284916]: 2025-11-29 05:48:43.04561844 +0000 UTC m=+0.104196014 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 05:48:43 compute-0 nova_compute[254898]: 2025-11-29 05:48:43.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:44 compute-0 ceph-mon[75176]: pgmap v1361: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:44 compute-0 nova_compute[254898]: 2025-11-29 05:48:44.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:44 compute-0 nova_compute[254898]: 2025-11-29 05:48:44.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:45 compute-0 nova_compute[254898]: 2025-11-29 05:48:45.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:45 compute-0 nova_compute[254898]: 2025-11-29 05:48:45.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
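The stream of "Running periodic task ..." DEBUG lines, and the config-gated skip above, come from oslo.service's periodic-task machinery. A minimal sketch of that pattern, assuming oslo.service and oslo.config are installed; the manager class and option registration below are illustrative, not Nova's real definitions:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        # run_periodic_tasks() logs "Running periodic task ..." for each
        # decorated method, as seen in the nova_compute lines above.
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # same guard as the "skipping..." message above

    Manager().run_periodic_tasks(context=None)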
Nov 29 05:48:45 compute-0 nova_compute[254898]: 2025-11-29 05:48:45.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:45 compute-0 nova_compute[254898]: 2025-11-29 05:48:45.986 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:48:45 compute-0 nova_compute[254898]: 2025-11-29 05:48:45.986 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:48:45 compute-0 nova_compute[254898]: 2025-11-29 05:48:45.986 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
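The acquire/acquired/released trio above is the standard trace emitted by oslo.concurrency's lock decorator (the "inner" wrapper named in the file path). A minimal sketch of the pattern, with the lock name taken from the trace and a hypothetical function body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Work done while holding the lock; the three DEBUG lines above are
        # emitted by the wrapper around a call like this one.
        pass

    clean_compute_node_cache()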
Nov 29 05:48:45 compute-0 nova_compute[254898]: 2025-11-29 05:48:45.986 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:48:45 compute-0 nova_compute[254898]: 2025-11-29 05:48:45.987 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:48:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:46 compute-0 ceph-mon[75176]: pgmap v1362: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:46 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:48:46 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/136831747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:48:46 compute-0 nova_compute[254898]: 2025-11-29 05:48:46.420 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
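The two CMD lines above show the resource tracker shelling out to "ceph df" through oslo.concurrency. A minimal sketch of the same call, assuming oslo.concurrency is installed and /etc/ceph/ceph.conf plus the client.openstack keyring are readable:

    from oslo_concurrency import processutils

    # Runs the command logged above; execute() returns (stdout, stderr) and
    # raises ProcessExecutionError on a non-zero exit code.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    print(out)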
Nov 29 05:48:46 compute-0 nova_compute[254898]: 2025-11-29 05:48:46.565 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:48:46 compute-0 nova_compute[254898]: 2025-11-29 05:48:46.566 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4980MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:48:46 compute-0 nova_compute[254898]: 2025-11-29 05:48:46.567 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:48:46 compute-0 nova_compute[254898]: 2025-11-29 05:48:46.567 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:48:46 compute-0 nova_compute[254898]: 2025-11-29 05:48:46.925 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:48:46 compute-0 nova_compute[254898]: 2025-11-29 05:48:46.925 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.014 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing inventories for resource provider 59594bc8-0143-475b-913f-cbe106b48966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.119 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating ProviderTree inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.120 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
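The inventory dict logged above is what placement uses to size this provider; usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check against the figures in the two lines above:

    # Worked from the logged inventory: (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~53.1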
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.131 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing aggregate associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.153 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing trait associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.166 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:48:47 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/136831747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:48:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:48:47 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3457357074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.556 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.562 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.577 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.578 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.578 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:48:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:47 compute-0 nova_compute[254898]: 2025-11-29 05:48:47.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 05:48:48 compute-0 nova_compute[254898]: 2025-11-29 05:48:48.268 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:48 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3457357074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:48:48 compute-0 ceph-mon[75176]: pgmap v1363: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:50 compute-0 ceph-mon[75176]: pgmap v1364: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:50 compute-0 nova_compute[254898]: 2025-11-29 05:48:50.972 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:50 compute-0 nova_compute[254898]: 2025-11-29 05:48:50.972 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:50 compute-0 nova_compute[254898]: 2025-11-29 05:48:50.973 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:48:50 compute-0 nova_compute[254898]: 2025-11-29 05:48:50.973 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:48:50 compute-0 nova_compute[254898]: 2025-11-29 05:48:50.989 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:48:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
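Each pg_autoscaler line above multiplies the pool's share of raw space by a per-pool bias and a cluster-wide PG budget, then rounds to a power of two. The logged numbers imply a budget of 300 PGs, consistent with the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs (an assumption, not stated in the log):

    # Worked check of the 'images' line above.
    usage = 0.000665858301588852     # pool's share of space, from the log
    bias = 1.0
    pg_budget = 100 * 3              # assumed: mon_target_pg_per_osd x OSD count
    print(usage * pg_budget * bias)  # ~0.1998, matching the logged pg target
    # The result is then quantized to a power of two with a floor, hence
    # "quantized to 32 (current 32)".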
Nov 29 05:48:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:52 compute-0 ceph-mon[75176]: pgmap v1365: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:52 compute-0 nova_compute[254898]: 2025-11-29 05:48:52.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:48:52 compute-0 nova_compute[254898]: 2025-11-29 05:48:52.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 05:48:52 compute-0 nova_compute[254898]: 2025-11-29 05:48:52.978 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 05:48:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:54 compute-0 ceph-mon[75176]: pgmap v1366: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:55 compute-0 podman[284986]: 2025-11-29 05:48:55.008397678 +0000 UTC m=+0.054239763 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 05:48:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:56 compute-0 ceph-mon[75176]: pgmap v1367: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:48:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:48:58 compute-0 ceph-mon[75176]: pgmap v1368: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:00 compute-0 ceph-mon[75176]: pgmap v1369: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:02 compute-0 ceph-mon[75176]: pgmap v1370: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:04 compute-0 ceph-mon[75176]: pgmap v1371: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.399390) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344399502, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2408, "num_deletes": 507, "total_data_size": 3489987, "memory_usage": 3548544, "flush_reason": "Manual Compaction"}
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344428389, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3433848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28629, "largest_seqno": 31036, "table_properties": {"data_size": 3423144, "index_size": 6366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25736, "raw_average_key_size": 19, "raw_value_size": 3399380, "raw_average_value_size": 2625, "num_data_blocks": 281, "num_entries": 1295, "num_filter_entries": 1295, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764395117, "oldest_key_time": 1764395117, "file_creation_time": 1764395344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 29049 microseconds, and 12452 cpu microseconds.
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.428448) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3433848 bytes OK
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.428471) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.433910) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.433923) EVENT_LOG_v1 {"time_micros": 1764395344433919, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.433940) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3478648, prev total WAL file size 3478648, number of live WAL files 2.
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.434805) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3353KB)], [62(8607KB)]
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344434837, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12248009, "oldest_snapshot_seqno": -1}
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6118 keys, 10410175 bytes, temperature: kUnknown
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344502373, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10410175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10366875, "index_size": 26934, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 154209, "raw_average_key_size": 25, "raw_value_size": 10254792, "raw_average_value_size": 1676, "num_data_blocks": 1100, "num_entries": 6118, "num_filter_entries": 6118, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764395344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.502594) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10410175 bytes
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.504506) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.2 rd, 154.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.6) write-amplify(3.0) OK, records in: 7148, records dropped: 1030 output_compression: NoCompression
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.504521) EVENT_LOG_v1 {"time_micros": 1764395344504514, "job": 34, "event": "compaction_finished", "compaction_time_micros": 67604, "compaction_time_cpu_micros": 20410, "output_level": 6, "num_output_files": 1, "total_output_size": 10410175, "num_input_records": 7148, "num_output_records": 6118, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344505122, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395344506625, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.434709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.506691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.506696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.506698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.506700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:49:04 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:49:04.506702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
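The compaction summary above (JOB 34) carries its own amplification figures, and they follow directly from the logged byte counts. Worked check:

    # Bytes from the EVENT_LOG lines above: L0 input table #64, total input,
    # and the single L6 output table #65.
    l0_in = 3433848
    total_in = 12248009                  # "input_data_size"
    out = 10410175                       # "total_output_size"
    print(out / l0_in)                   # ~3.03 -> "write-amplify(3.0)"
    print((total_in + out) / l0_in)      # ~6.60 -> "read-write-amplify(6.6)"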
Nov 29 05:49:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:06 compute-0 ceph-mon[75176]: pgmap v1372: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:08 compute-0 ceph-mon[75176]: pgmap v1373: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:10 compute-0 ceph-mon[75176]: pgmap v1374: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:11 compute-0 podman[285006]: 2025-11-29 05:49:11.009100843 +0000 UTC m=+0.054221102 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 05:49:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:49:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:49:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:49:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:49:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:49:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:49:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:12 compute-0 ceph-mon[75176]: pgmap v1375: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:49:13.764 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:49:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:49:13.764 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:49:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:49:13.765 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:49:14 compute-0 podman[285026]: 2025-11-29 05:49:14.053376592 +0000 UTC m=+0.104604094 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 05:49:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:49:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874808858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:49:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:49:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3874808858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:49:14 compute-0 ceph-mon[75176]: pgmap v1376: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3874808858' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:49:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3874808858' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:49:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:16 compute-0 ceph-mon[75176]: pgmap v1377: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:18 compute-0 ceph-mon[75176]: pgmap v1378: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:18 compute-0 sudo[285054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:49:18 compute-0 sudo[285054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:18 compute-0 sudo[285054]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:18 compute-0 sudo[285079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:49:18 compute-0 sudo[285079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:18 compute-0 sudo[285079]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:18 compute-0 sudo[285104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:49:18 compute-0 sudo[285104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:18 compute-0 sudo[285104]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:18 compute-0 sudo[285129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:49:18 compute-0 sudo[285129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:18 compute-0 sshd-session[285052]: Received disconnect from 45.249.245.22 port 51678:11: Bye Bye [preauth]
Nov 29 05:49:18 compute-0 sshd-session[285052]: Disconnected from authenticating user root 45.249.245.22 port 51678 [preauth]
Nov 29 05:49:19 compute-0 nova_compute[254898]: 2025-11-29 05:49:19.244 254902 DEBUG oslo_concurrency.processutils [None req-677f7328-0038-4310-a215-6b6c196af2d2 da42e74ed6d04223b9f1be411e89508b 389b14b74e3c4a1184dca228ba013067 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:49:19 compute-0 nova_compute[254898]: 2025-11-29 05:49:19.274 254902 DEBUG oslo_concurrency.processutils [None req-677f7328-0038-4310-a215-6b6c196af2d2 da42e74ed6d04223b9f1be411e89508b 389b14b74e3c4a1184dca228ba013067 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:49:19 compute-0 sudo[285129]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:49:19 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:49:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:49:19 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:49:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:49:19 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:49:19 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 08dfde94-1a41-4539-b869-df98d45e93bc does not exist
Nov 29 05:49:19 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 0bb3d3a0-393a-4b64-bf45-4b4d68c95067 does not exist
Nov 29 05:49:19 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev b7d84775-41a8-418e-993f-b4df089a24cf does not exist
Nov 29 05:49:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:49:19 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:49:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:49:19 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:49:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:49:19 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:49:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:49:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:49:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:49:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:49:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:49:19 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:49:19 compute-0 sudo[285186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:49:19 compute-0 sudo[285186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:19 compute-0 sudo[285186]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:19 compute-0 sudo[285211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:49:19 compute-0 sudo[285211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:19 compute-0 sudo[285211]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:19 compute-0 sudo[285236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:49:19 compute-0 sudo[285236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:19 compute-0 sudo[285236]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:19 compute-0 sudo[285261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:49:19 compute-0 sudo[285261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:19 compute-0 podman[285326]: 2025-11-29 05:49:19.912142234 +0000 UTC m=+0.039919360 container create 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 05:49:19 compute-0 systemd[1]: Started libpod-conmon-8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25.scope.
Nov 29 05:49:19 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:49:19 compute-0 podman[285326]: 2025-11-29 05:49:19.894564192 +0000 UTC m=+0.022341348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:49:20 compute-0 podman[285326]: 2025-11-29 05:49:20.001647835 +0000 UTC m=+0.129424991 container init 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:49:20 compute-0 podman[285326]: 2025-11-29 05:49:20.007943425 +0000 UTC m=+0.135720541 container start 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:49:20 compute-0 podman[285326]: 2025-11-29 05:49:20.01101706 +0000 UTC m=+0.138794216 container attach 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:49:20 compute-0 determined_nightingale[285342]: 167 167
Nov 29 05:49:20 compute-0 systemd[1]: libpod-8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25.scope: Deactivated successfully.
Nov 29 05:49:20 compute-0 podman[285326]: 2025-11-29 05:49:20.015546619 +0000 UTC m=+0.143323745 container died 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 05:49:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b236aead69a0c62144312497722f29f7a3ed6ba32e34b74efaf9220ef53ba3fc-merged.mount: Deactivated successfully.
Nov 29 05:49:20 compute-0 podman[285326]: 2025-11-29 05:49:20.058666945 +0000 UTC m=+0.186444071 container remove 8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 05:49:20 compute-0 systemd[1]: libpod-conmon-8419b549d9c3d78f7e785dc1b77389e6a2a5130fcdcac4db13b49e20655bfa25.scope: Deactivated successfully.
Nov 29 05:49:20 compute-0 podman[285368]: 2025-11-29 05:49:20.220629195 +0000 UTC m=+0.043885195 container create 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:49:20 compute-0 systemd[1]: Started libpod-conmon-255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391.scope.
Nov 29 05:49:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:20 compute-0 podman[285368]: 2025-11-29 05:49:20.201867665 +0000 UTC m=+0.025123655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:49:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:20 compute-0 podman[285368]: 2025-11-29 05:49:20.306715054 +0000 UTC m=+0.129971044 container init 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:49:20 compute-0 podman[285368]: 2025-11-29 05:49:20.315096765 +0000 UTC m=+0.138352735 container start 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:49:20 compute-0 podman[285368]: 2025-11-29 05:49:20.317530103 +0000 UTC m=+0.140786103 container attach 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:49:20 compute-0 ceph-mon[75176]: pgmap v1379: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:21 compute-0 compassionate_shockley[285385]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:49:21 compute-0 compassionate_shockley[285385]: --> relative data size: 1.0
Nov 29 05:49:21 compute-0 compassionate_shockley[285385]: --> All data devices are unavailable
Nov 29 05:49:21 compute-0 systemd[1]: libpod-255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391.scope: Deactivated successfully.
Nov 29 05:49:21 compute-0 podman[285368]: 2025-11-29 05:49:21.325858008 +0000 UTC m=+1.149113988 container died 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:49:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c33cce2bf393ae58431544e5c1bc84005ce1cd8d94988cfeb0bcaa27641095-merged.mount: Deactivated successfully.
Nov 29 05:49:21 compute-0 podman[285368]: 2025-11-29 05:49:21.374579728 +0000 UTC m=+1.197835698 container remove 255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:49:21 compute-0 systemd[1]: libpod-conmon-255a7f315b886829eccf22b6cebc4e0a553c4d659db93cef62e16c6367f88391.scope: Deactivated successfully.
Nov 29 05:49:21 compute-0 sudo[285261]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:21 compute-0 sudo[285426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:49:21 compute-0 sudo[285426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:21 compute-0 sudo[285426]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:21 compute-0 sudo[285451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:49:21 compute-0 sudo[285451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:21 compute-0 sudo[285451]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:21 compute-0 sudo[285476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:49:21 compute-0 sudo[285476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:21 compute-0 sudo[285476]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:21 compute-0 sudo[285501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:49:21 compute-0 sudo[285501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:22 compute-0 podman[285568]: 2025-11-29 05:49:22.084959405 +0000 UTC m=+0.039223883 container create 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:49:22 compute-0 systemd[1]: Started libpod-conmon-287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4.scope.
Nov 29 05:49:22 compute-0 podman[285568]: 2025-11-29 05:49:22.068469599 +0000 UTC m=+0.022734087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:49:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:49:22 compute-0 podman[285568]: 2025-11-29 05:49:22.201768851 +0000 UTC m=+0.156033339 container init 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 05:49:22 compute-0 podman[285568]: 2025-11-29 05:49:22.209962878 +0000 UTC m=+0.164227346 container start 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:49:22 compute-0 podman[285568]: 2025-11-29 05:49:22.212947019 +0000 UTC m=+0.167211577 container attach 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 29 05:49:22 compute-0 admiring_dijkstra[285585]: 167 167
Nov 29 05:49:22 compute-0 systemd[1]: libpod-287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4.scope: Deactivated successfully.
Nov 29 05:49:22 compute-0 podman[285568]: 2025-11-29 05:49:22.215947931 +0000 UTC m=+0.170212409 container died 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:49:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf7e365b10ee122af41c62c1916df318057e991a7834c3c12a4ca72789422882-merged.mount: Deactivated successfully.
Nov 29 05:49:22 compute-0 podman[285568]: 2025-11-29 05:49:22.24837504 +0000 UTC m=+0.202639508 container remove 287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 05:49:22 compute-0 systemd[1]: libpod-conmon-287ff5b36df50eb0522d1a067c57b80fc41a37985a88abeacfef847972f51ad4.scope: Deactivated successfully.
Nov 29 05:49:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:22 compute-0 ceph-mon[75176]: pgmap v1380: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:22 compute-0 podman[285609]: 2025-11-29 05:49:22.412522844 +0000 UTC m=+0.044864458 container create f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:49:22 compute-0 systemd[1]: Started libpod-conmon-f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198.scope.
Nov 29 05:49:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d43a43225ca9b1a23233cd9798e2e8397c7063b159b6a5bfbbea8a4e1b5a57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d43a43225ca9b1a23233cd9798e2e8397c7063b159b6a5bfbbea8a4e1b5a57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d43a43225ca9b1a23233cd9798e2e8397c7063b159b6a5bfbbea8a4e1b5a57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78d43a43225ca9b1a23233cd9798e2e8397c7063b159b6a5bfbbea8a4e1b5a57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:22 compute-0 podman[285609]: 2025-11-29 05:49:22.39151563 +0000 UTC m=+0.023857294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:49:22 compute-0 podman[285609]: 2025-11-29 05:49:22.495897337 +0000 UTC m=+0.128238961 container init f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:49:22 compute-0 podman[285609]: 2025-11-29 05:49:22.504616697 +0000 UTC m=+0.136958311 container start f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:49:22 compute-0 podman[285609]: 2025-11-29 05:49:22.507565138 +0000 UTC m=+0.139906772 container attach f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 05:49:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:23 compute-0 awesome_jones[285626]: {
Nov 29 05:49:23 compute-0 awesome_jones[285626]:     "0": [
Nov 29 05:49:23 compute-0 awesome_jones[285626]:         {
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "devices": [
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "/dev/loop3"
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             ],
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_name": "ceph_lv0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_size": "21470642176",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "name": "ceph_lv0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "tags": {
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.cluster_name": "ceph",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.crush_device_class": "",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.encrypted": "0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.osd_id": "0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.type": "block",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.vdo": "0"
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             },
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "type": "block",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "vg_name": "ceph_vg0"
Nov 29 05:49:23 compute-0 awesome_jones[285626]:         }
Nov 29 05:49:23 compute-0 awesome_jones[285626]:     ],
Nov 29 05:49:23 compute-0 awesome_jones[285626]:     "1": [
Nov 29 05:49:23 compute-0 awesome_jones[285626]:         {
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "devices": [
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "/dev/loop4"
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             ],
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_name": "ceph_lv1",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_size": "21470642176",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "name": "ceph_lv1",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "tags": {
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.cluster_name": "ceph",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.crush_device_class": "",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.encrypted": "0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.osd_id": "1",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.type": "block",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.vdo": "0"
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             },
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "type": "block",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "vg_name": "ceph_vg1"
Nov 29 05:49:23 compute-0 awesome_jones[285626]:         }
Nov 29 05:49:23 compute-0 awesome_jones[285626]:     ],
Nov 29 05:49:23 compute-0 awesome_jones[285626]:     "2": [
Nov 29 05:49:23 compute-0 awesome_jones[285626]:         {
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "devices": [
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "/dev/loop5"
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             ],
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_name": "ceph_lv2",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_size": "21470642176",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "name": "ceph_lv2",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "tags": {
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.cluster_name": "ceph",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.crush_device_class": "",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.encrypted": "0",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.osd_id": "2",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.type": "block",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:                 "ceph.vdo": "0"
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             },
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "type": "block",
Nov 29 05:49:23 compute-0 awesome_jones[285626]:             "vg_name": "ceph_vg2"
Nov 29 05:49:23 compute-0 awesome_jones[285626]:         }
Nov 29 05:49:23 compute-0 awesome_jones[285626]:     ]
Nov 29 05:49:23 compute-0 awesome_jones[285626]: }
Nov 29 05:49:23 compute-0 systemd[1]: libpod-f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198.scope: Deactivated successfully.
Nov 29 05:49:23 compute-0 podman[285609]: 2025-11-29 05:49:23.329143345 +0000 UTC m=+0.961484969 container died f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:49:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-78d43a43225ca9b1a23233cd9798e2e8397c7063b159b6a5bfbbea8a4e1b5a57-merged.mount: Deactivated successfully.
Nov 29 05:49:23 compute-0 podman[285609]: 2025-11-29 05:49:23.378601264 +0000 UTC m=+1.010942878 container remove f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:49:23 compute-0 systemd[1]: libpod-conmon-f366af29c78dc3651be4f40d80a57c07705acbf1a292b7869bb6d9d535885198.scope: Deactivated successfully.
Nov 29 05:49:23 compute-0 sudo[285501]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:23 compute-0 sudo[285649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:49:23 compute-0 sudo[285649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:23 compute-0 sudo[285649]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:23 compute-0 sudo[285674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:49:23 compute-0 sudo[285674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:23 compute-0 sudo[285674]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:23 compute-0 sudo[285699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:49:23 compute-0 sudo[285699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:23 compute-0 sudo[285699]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:23 compute-0 sudo[285724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:49:23 compute-0 sudo[285724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:24 compute-0 podman[285790]: 2025-11-29 05:49:24.042020572 +0000 UTC m=+0.046300654 container create d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:49:24 compute-0 systemd[1]: Started libpod-conmon-d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8.scope.
Nov 29 05:49:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:49:24 compute-0 podman[285790]: 2025-11-29 05:49:24.024647924 +0000 UTC m=+0.028928056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:49:24 compute-0 podman[285790]: 2025-11-29 05:49:24.125891906 +0000 UTC m=+0.130172028 container init d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:49:24 compute-0 podman[285790]: 2025-11-29 05:49:24.138327135 +0000 UTC m=+0.142607227 container start d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:49:24 compute-0 podman[285790]: 2025-11-29 05:49:24.141197064 +0000 UTC m=+0.145477246 container attach d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 29 05:49:24 compute-0 modest_goldwasser[285806]: 167 167
Nov 29 05:49:24 compute-0 systemd[1]: libpod-d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8.scope: Deactivated successfully.
Nov 29 05:49:24 compute-0 podman[285790]: 2025-11-29 05:49:24.145156729 +0000 UTC m=+0.149436821 container died d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:49:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7be8ef5d4becb8e7b4483acb0a787332c8a3bac46dbba8163cc35afb5a366ce-merged.mount: Deactivated successfully.
Nov 29 05:49:24 compute-0 podman[285790]: 2025-11-29 05:49:24.186067262 +0000 UTC m=+0.190347344 container remove d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:49:24 compute-0 systemd[1]: libpod-conmon-d2873863f6a6464931dfc2bddeabde38d4e2ba1bf90aad1c0fb7ddc856db5ea8.scope: Deactivated successfully.
Nov 29 05:49:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:24 compute-0 podman[285828]: 2025-11-29 05:49:24.349865177 +0000 UTC m=+0.046198851 container create 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:49:24 compute-0 systemd[1]: Started libpod-conmon-84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41.scope.
Nov 29 05:49:24 compute-0 ceph-mon[75176]: pgmap v1381: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:24 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93f70ab2d2ab4451b0e188928c2ad60662541b1c34c52b99d574754db85d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93f70ab2d2ab4451b0e188928c2ad60662541b1c34c52b99d574754db85d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93f70ab2d2ab4451b0e188928c2ad60662541b1c34c52b99d574754db85d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c93f70ab2d2ab4451b0e188928c2ad60662541b1c34c52b99d574754db85d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:49:24 compute-0 podman[285828]: 2025-11-29 05:49:24.328636817 +0000 UTC m=+0.024970511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:49:24 compute-0 podman[285828]: 2025-11-29 05:49:24.433384104 +0000 UTC m=+0.129717798 container init 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:49:24 compute-0 podman[285828]: 2025-11-29 05:49:24.440871223 +0000 UTC m=+0.137204897 container start 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 05:49:24 compute-0 podman[285828]: 2025-11-29 05:49:24.44444575 +0000 UTC m=+0.140779424 container attach 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:49:25 compute-0 naughty_wilson[285844]: {
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "osd_id": 0,
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "type": "bluestore"
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:     },
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "osd_id": 1,
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "type": "bluestore"
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:     },
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "osd_id": 2,
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:         "type": "bluestore"
Nov 29 05:49:25 compute-0 naughty_wilson[285844]:     }
Nov 29 05:49:25 compute-0 naughty_wilson[285844]: }
Nov 29 05:49:25 compute-0 systemd[1]: libpod-84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41.scope: Deactivated successfully.
Nov 29 05:49:25 compute-0 podman[285828]: 2025-11-29 05:49:25.431177826 +0000 UTC m=+1.127511500 container died 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:49:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-29c93f70ab2d2ab4451b0e188928c2ad60662541b1c34c52b99d574754db85d7-merged.mount: Deactivated successfully.
Nov 29 05:49:25 compute-0 podman[285828]: 2025-11-29 05:49:25.489050425 +0000 UTC m=+1.185384099 container remove 84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:49:25 compute-0 systemd[1]: libpod-conmon-84ba2e778dd461db42f4ae139ac8524d51af52fed08bc9b943a94e6e91fd4e41.scope: Deactivated successfully.
Nov 29 05:49:25 compute-0 sudo[285724]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:49:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:49:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:49:25 compute-0 podman[285878]: 2025-11-29 05:49:25.538428702 +0000 UTC m=+0.067402961 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:49:25 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:49:25 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 84ae6f4e-32a9-4929-8ffd-7bf327f3c1e3 does not exist
Nov 29 05:49:25 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev fcc4f5ba-84f0-450e-9ba7-5445fc954b5f does not exist
Nov 29 05:49:25 compute-0 sudo[285909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:49:25 compute-0 sudo[285909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:25 compute-0 sudo[285909]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:25 compute-0 sudo[285934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:49:25 compute-0 sudo[285934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:49:25 compute-0 sudo[285934]: pam_unix(sudo:session): session closed for user root
Nov 29 05:49:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:26 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:49:26 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:49:26 compute-0 ceph-mon[75176]: pgmap v1382: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:27 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:49:27.228 163973 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:57:69', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9a:e7:3b:9e:3e:09'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 05:49:27 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:49:27.229 163973 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 05:49:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:28 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:49:28.231 163973 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=63cfe9d2-e938-418d-9401-5d1a600b4ede, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 05:49:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:28 compute-0 ceph-mon[75176]: pgmap v1383: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:30 compute-0 ceph-mon[75176]: pgmap v1384: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:32 compute-0 ceph-mon[75176]: pgmap v1385: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:34 compute-0 ceph-mon[75176]: pgmap v1386: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:36 compute-0 ceph-mon[75176]: pgmap v1387: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:49:37 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9347 writes, 33K keys, 9347 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9347 writes, 2355 syncs, 3.97 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2116 writes, 5839 keys, 2116 commit groups, 1.0 writes per commit group, ingest: 7.88 MB, 0.01 MB/s
                                           Interval WAL: 2116 writes, 782 syncs, 2.71 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:49:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:38 compute-0 ceph-mon[75176]: pgmap v1388: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:40 compute-0 ceph-mon[75176]: pgmap v1389: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:49:41
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'images']
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:49:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:49:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:49:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:49:42 compute-0 podman[285959]: 2025-11-29 05:49:42.037842656 +0000 UTC m=+0.091642022 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:49:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:42 compute-0 ceph-mon[75176]: pgmap v1390: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:49:42 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 14K writes, 52K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4177 syncs, 3.37 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3169 writes, 9859 keys, 3169 commit groups, 1.0 writes per commit group, ingest: 13.28 MB, 0.02 MB/s
                                           Interval WAL: 3169 writes, 1178 syncs, 2.69 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:49:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:42 compute-0 nova_compute[254898]: 2025-11-29 05:49:42.978 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:49:43 compute-0 nova_compute[254898]: 2025-11-29 05:49:43.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:49:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:44 compute-0 ceph-mon[75176]: pgmap v1391: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:45 compute-0 podman[285979]: 2025-11-29 05:49:45.073177847 +0000 UTC m=+0.114764078 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 05:49:45 compute-0 nova_compute[254898]: 2025-11-29 05:49:45.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:49:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:46 compute-0 ceph-mon[75176]: pgmap v1392: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:46 compute-0 nova_compute[254898]: 2025-11-29 05:49:46.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:49:46 compute-0 nova_compute[254898]: 2025-11-29 05:49:46.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:49:46 compute-0 nova_compute[254898]: 2025-11-29 05:49:46.955 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:49:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:49:47 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9735 writes, 34K keys, 9735 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9735 writes, 2412 syncs, 4.04 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1751 writes, 3893 keys, 1751 commit groups, 1.0 writes per commit group, ingest: 1.60 MB, 0.00 MB/s
                                           Interval WAL: 1751 writes, 547 syncs, 3.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:49:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:47 compute-0 nova_compute[254898]: 2025-11-29 05:49:47.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:49:47 compute-0 nova_compute[254898]: 2025-11-29 05:49:47.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:49:47 compute-0 nova_compute[254898]: 2025-11-29 05:49:47.983 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:49:47 compute-0 nova_compute[254898]: 2025-11-29 05:49:47.983 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:49:47 compute-0 nova_compute[254898]: 2025-11-29 05:49:47.984 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:49:47 compute-0 nova_compute[254898]: 2025-11-29 05:49:47.984 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:49:47 compute-0 nova_compute[254898]: 2025-11-29 05:49:47.984 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:49:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:48 compute-0 ceph-mon[75176]: pgmap v1393: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:49:48 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2149476587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:49:48 compute-0 nova_compute[254898]: 2025-11-29 05:49:48.456 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:49:48 compute-0 nova_compute[254898]: 2025-11-29 05:49:48.620 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:49:48 compute-0 nova_compute[254898]: 2025-11-29 05:49:48.621 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4959MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:49:48 compute-0 nova_compute[254898]: 2025-11-29 05:49:48.621 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:49:48 compute-0 nova_compute[254898]: 2025-11-29 05:49:48.622 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:49:48 compute-0 nova_compute[254898]: 2025-11-29 05:49:48.691 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:49:48 compute-0 nova_compute[254898]: 2025-11-29 05:49:48.691 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:49:48 compute-0 nova_compute[254898]: 2025-11-29 05:49:48.712 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:49:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:49:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2226711259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:49:49 compute-0 nova_compute[254898]: 2025-11-29 05:49:49.092 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:49:49 compute-0 nova_compute[254898]: 2025-11-29 05:49:49.098 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:49:49 compute-0 nova_compute[254898]: 2025-11-29 05:49:49.116 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:49:49 compute-0 nova_compute[254898]: 2025-11-29 05:49:49.119 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:49:49 compute-0 nova_compute[254898]: 2025-11-29 05:49:49.119 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:49:49 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2149476587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:49:49 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2226711259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:49:49 compute-0 sshd-session[286005]: Received disconnect from 45.78.217.106 port 40340:11: Bye Bye [preauth]
Nov 29 05:49:49 compute-0 sshd-session[286005]: Disconnected from authenticating user root 45.78.217.106 port 40340 [preauth]
Nov 29 05:49:50 compute-0 ceph-mgr[75473]: [devicehealth INFO root] Check health
Nov 29 05:49:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:50 compute-0 ceph-mon[75176]: pgmap v1394: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:50 compute-0 sshd-session[286051]: Received disconnect from 45.120.216.232 port 52628:11: Bye Bye [preauth]
Nov 29 05:49:50 compute-0 sshd-session[286051]: Disconnected from authenticating user root 45.120.216.232 port 52628 [preauth]
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:49:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:49:52 compute-0 nova_compute[254898]: 2025-11-29 05:49:52.120 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:49:52 compute-0 nova_compute[254898]: 2025-11-29 05:49:52.120 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:49:52 compute-0 nova_compute[254898]: 2025-11-29 05:49:52.121 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:49:52 compute-0 nova_compute[254898]: 2025-11-29 05:49:52.134 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:49:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:52 compute-0 ceph-mon[75176]: pgmap v1395: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:52 compute-0 nova_compute[254898]: 2025-11-29 05:49:52.963 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:49:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:54 compute-0 ceph-mon[75176]: pgmap v1396: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:55 compute-0 podman[286053]: 2025-11-29 05:49:55.994927007 +0000 UTC m=+0.046185030 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 05:49:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:56 compute-0 ceph-mon[75176]: pgmap v1397: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:49:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:58 compute-0 ceph-mon[75176]: pgmap v1398: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:49:59 compute-0 sshd-session[286073]: Invalid user andy from 152.32.145.111 port 43420
Nov 29 05:49:59 compute-0 sshd-session[286073]: Received disconnect from 152.32.145.111 port 43420:11: Bye Bye [preauth]
Nov 29 05:49:59 compute-0 sshd-session[286073]: Disconnected from invalid user andy 152.32.145.111 port 43420 [preauth]
Nov 29 05:50:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:00 compute-0 ceph-mon[75176]: pgmap v1399: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:02 compute-0 sshd-session[286075]: Invalid user roott from 192.161.60.110 port 56524
Nov 29 05:50:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:02 compute-0 sshd-session[286075]: Received disconnect from 192.161.60.110 port 56524:11: Bye Bye [preauth]
Nov 29 05:50:02 compute-0 sshd-session[286075]: Disconnected from invalid user roott 192.161.60.110 port 56524 [preauth]
Nov 29 05:50:02 compute-0 ceph-mon[75176]: pgmap v1400: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:04 compute-0 ceph-mon[75176]: pgmap v1401: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:06 compute-0 ceph-mon[75176]: pgmap v1402: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:08 compute-0 ceph-mon[75176]: pgmap v1403: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:10 compute-0 ceph-mon[75176]: pgmap v1404: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:50:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:50:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:50:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:50:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:50:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:50:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:12 compute-0 ceph-mon[75176]: pgmap v1405: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:12 compute-0 podman[286077]: 2025-11-29 05:50:12.999017478 +0000 UTC m=+0.050014142 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:50:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:50:13.764 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:50:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:50:13.765 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:50:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:50:13.765 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:50:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:50:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/528576601' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:50:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:50:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/528576601' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:50:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/528576601' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:50:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/528576601' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:50:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:15 compute-0 ceph-mon[75176]: pgmap v1406: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:16 compute-0 podman[286098]: 2025-11-29 05:50:16.020566528 +0000 UTC m=+0.079819858 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:50:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:16 compute-0 ceph-mon[75176]: pgmap v1407: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:18 compute-0 ceph-mon[75176]: pgmap v1408: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:20 compute-0 ceph-mon[75176]: pgmap v1409: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:22 compute-0 ceph-mon[75176]: pgmap v1410: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:24 compute-0 ceph-mon[75176]: pgmap v1411: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:25 compute-0 sudo[286126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:25 compute-0 sudo[286126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:25 compute-0 sudo[286126]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:25 compute-0 sudo[286151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:50:25 compute-0 sudo[286151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:25 compute-0 sudo[286151]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:25 compute-0 sudo[286176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:25 compute-0 sudo[286176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:25 compute-0 sudo[286176]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:26 compute-0 sudo[286201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 29 05:50:26 compute-0 sudo[286201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:26 compute-0 podman[286225]: 2025-11-29 05:50:26.119122831 +0000 UTC m=+0.072794680 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 05:50:26 compute-0 sudo[286201]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:50:26 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:50:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:50:26 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:50:26 compute-0 sudo[286264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:26 compute-0 sudo[286264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:26 compute-0 sudo[286264]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:26 compute-0 sudo[286289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:50:26 compute-0 sudo[286289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:26 compute-0 sudo[286289]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:26 compute-0 sudo[286314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:26 compute-0 sudo[286314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:26 compute-0 sudo[286314]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:26 compute-0 sudo[286339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:50:26 compute-0 sudo[286339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:26 compute-0 sudo[286339]: pam_unix(sudo:session): session closed for user root
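gather-facts is the inventory half of the cephadm host loop: it reports CPU, memory, NIC and disk facts back to the mgr as JSON. A sketch of a manual run (path and timeout from the logged command; the field names printed at the end are an assumption, hence the defensive .get):

    import json
    import subprocess

    FSID = "93f82912-647c-5e78-b081-707d0a2966d8"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    facts = json.loads(subprocess.check_output(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
        text=True))
    print(facts.get("hostname"), facts.get("memory_total_kb"))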
Nov 29 05:50:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:50:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:50:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:50:26 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:50:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:50:26 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:50:26 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d4be9e86-cf92-480e-970d-fcb04b55df85 does not exist
Nov 29 05:50:26 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 22ec78fc-1a0d-4012-9996-64993acdc1b7 does not exist
Nov 29 05:50:26 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev abba0dab-be50-4429-9961-314a4223ae0e does not exist
Nov 29 05:50:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:50:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:50:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:50:26 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:50:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:50:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
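Before touching any disks, the mgr asks the mon for a minimal ceph.conf and for the client.admin and client.bootstrap-osd keyrings, which it injects into the provisioning containers below. The same data can be fetched with the ceph CLI; a sketch, assuming an admin keyring is available on this host:

    import subprocess

    # Mirrors the mon_commands dispatched in the audit lines above.
    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True)
    bootstrap_keyring = subprocess.check_output(
        ["ceph", "auth", "get", "client.bootstrap-osd"], text=True)
    print(minimal_conf)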
Nov 29 05:50:27 compute-0 sudo[286395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:27 compute-0 sudo[286395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:27 compute-0 sudo[286395]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:27 compute-0 sudo[286420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:50:27 compute-0 sudo[286420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:27 compute-0 sudo[286420]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:27 compute-0 sudo[286445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:27 compute-0 sudo[286445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:27 compute-0 sudo[286445]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:27 compute-0 sudo[286470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:50:27 compute-0 sudo[286470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:50:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:50:27 compute-0 ceph-mon[75176]: pgmap v1412: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:50:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:50:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:50:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:50:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:50:27 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:50:27 compute-0 podman[286535]: 2025-11-29 05:50:27.65545187 +0000 UTC m=+0.039067990 container create 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 05:50:27 compute-0 systemd[1]: Started libpod-conmon-782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a.scope.
Nov 29 05:50:27 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:50:27 compute-0 podman[286535]: 2025-11-29 05:50:27.722371058 +0000 UTC m=+0.105987198 container init 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 05:50:27 compute-0 podman[286535]: 2025-11-29 05:50:27.72954017 +0000 UTC m=+0.113156300 container start 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:50:27 compute-0 podman[286535]: 2025-11-29 05:50:27.73247273 +0000 UTC m=+0.116088860 container attach 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 29 05:50:27 compute-0 podman[286535]: 2025-11-29 05:50:27.638997654 +0000 UTC m=+0.022613794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:50:27 compute-0 mystifying_bassi[286552]: 167 167
Nov 29 05:50:27 compute-0 systemd[1]: libpod-782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a.scope: Deactivated successfully.
Nov 29 05:50:27 compute-0 podman[286535]: 2025-11-29 05:50:27.738253049 +0000 UTC m=+0.121869179 container died 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:50:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f661d02cbc62cd57990905657dc82820f425628d7896a5f6d3d6203c7cee0947-merged.mount: Deactivated successfully.
Nov 29 05:50:27 compute-0 podman[286535]: 2025-11-29 05:50:27.76949045 +0000 UTC m=+0.153106580 container remove 782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:50:27 compute-0 systemd[1]: libpod-conmon-782dc5fb78455b27ff2689b42ffa8508a3f574b9130afe0c81278603c092f61a.scope: Deactivated successfully.
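The one-shot mystifying_bassi container that printed "167 167" is cephadm probing the uid/gid of the ceph user baked into the OSD image (167:167 in these Ceph images). A sketch of an equivalent probe; the exact command cephadm runs inside the container is an assumption, only the image digest and the "167 167" output are in the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Assumed probe: report the owner uid/gid of /var/lib/ceph inside the image.
    out = subprocess.check_output(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        text=True)
    print(out.strip())  # expected: 167 167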
Nov 29 05:50:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:27 compute-0 podman[286576]: 2025-11-29 05:50:27.942891615 +0000 UTC m=+0.046467687 container create 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:50:27 compute-0 systemd[1]: Started libpod-conmon-2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941.scope.
Nov 29 05:50:28 compute-0 podman[286576]: 2025-11-29 05:50:27.920704633 +0000 UTC m=+0.024280755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:50:28 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
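The kernel's "supports timestamps until 2038" lines are informational, not errors: they fire whenever podman bind-mounts paths from an XFS filesystem created without the bigtime feature. A quick way to check the backing filesystem; the mount point is an assumption, substitute the actual XFS mount hosting container storage:

    import subprocess

    # Current xfs_info prints a "bigtime=0|1" field; 0 means the 2038
    # timestamp limit from the kernel messages above applies.
    info = subprocess.check_output(
        ["xfs_info", "/var/lib/containers/storage"], text=True)
    print("bigtime=1" in info)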
Nov 29 05:50:28 compute-0 podman[286576]: 2025-11-29 05:50:28.038889902 +0000 UTC m=+0.142465984 container init 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:50:28 compute-0 podman[286576]: 2025-11-29 05:50:28.045979262 +0000 UTC m=+0.149555364 container start 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 29 05:50:28 compute-0 podman[286576]: 2025-11-29 05:50:28.049769173 +0000 UTC m=+0.153345255 container attach 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:50:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:28 compute-0 ceph-mon[75176]: pgmap v1413: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:29 compute-0 peaceful_noether[286593]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:50:29 compute-0 peaceful_noether[286593]: --> relative data size: 1.0
Nov 29 05:50:29 compute-0 peaceful_noether[286593]: --> All data devices are unavailable
Nov 29 05:50:29 compute-0 systemd[1]: libpod-2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941.scope: Deactivated successfully.
Nov 29 05:50:29 compute-0 podman[286576]: 2025-11-29 05:50:29.060356292 +0000 UTC m=+1.163932384 container died 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-91badc9c60c7951d58a60979aa65fbbe10e0dbbf385ca7ef43177200dfb65e1a-merged.mount: Deactivated successfully.
Nov 29 05:50:29 compute-0 podman[286576]: 2025-11-29 05:50:29.107515035 +0000 UTC m=+1.211091107 container remove 2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 05:50:29 compute-0 systemd[1]: libpod-conmon-2b16f93baf28f2f5d80b97012c2f1232ffdf64b37ca6eb132c847d3071c16941.scope: Deactivated successfully.
Nov 29 05:50:29 compute-0 sudo[286470]: pam_unix(sudo:session): session closed for user root
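The "All data devices are unavailable" result from the lvm batch run that just closed is expected here: the three LVs passed on the command line already carry OSD tags (visible in the lvm list JSON further down), so ceph-volume refuses to redeploy them and the batch is a no-op. A sketch of checking that state directly from LVM tags; the tag names come from the JSON below, the lvs invocation is standard LVM2:

    import json
    import subprocess

    # Any LV tagged ceph.osd_id= already belongs to a deployed OSD and is
    # therefore "unavailable" to `ceph-volume lvm batch`.
    report = json.loads(subprocess.check_output(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
        text=True))
    for lv in report["report"][0]["lv"]:
        if "ceph.osd_id=" in lv["lv_tags"]:
            print(f'{lv["vg_name"]}/{lv["lv_name"]}: already an OSD')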
Nov 29 05:50:29 compute-0 sudo[286635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:29 compute-0 sudo[286635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:29 compute-0 sudo[286635]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:29 compute-0 sudo[286660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:50:29 compute-0 sudo[286660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:29 compute-0 sudo[286660]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:29 compute-0 sudo[286685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:29 compute-0 sudo[286685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:29 compute-0 sudo[286685]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:29 compute-0 sudo[286710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:50:29 compute-0 sudo[286710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:29 compute-0 podman[286775]: 2025-11-29 05:50:29.706577077 +0000 UTC m=+0.046679002 container create 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 05:50:29 compute-0 systemd[1]: Started libpod-conmon-09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b.scope.
Nov 29 05:50:29 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:50:29 compute-0 podman[286775]: 2025-11-29 05:50:29.68547307 +0000 UTC m=+0.025575025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:50:29 compute-0 podman[286775]: 2025-11-29 05:50:29.784454398 +0000 UTC m=+0.124556343 container init 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:50:29 compute-0 podman[286775]: 2025-11-29 05:50:29.791764443 +0000 UTC m=+0.131866358 container start 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 05:50:29 compute-0 podman[286775]: 2025-11-29 05:50:29.794416227 +0000 UTC m=+0.134518182 container attach 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:50:29 compute-0 pensive_varahamihira[286791]: 167 167
Nov 29 05:50:29 compute-0 systemd[1]: libpod-09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b.scope: Deactivated successfully.
Nov 29 05:50:29 compute-0 podman[286775]: 2025-11-29 05:50:29.799391916 +0000 UTC m=+0.139493851 container died 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:50:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba44baceee81abe1fc0bc9b9fcc269c91fd0e18217727532dfc6d9f963ee768-merged.mount: Deactivated successfully.
Nov 29 05:50:29 compute-0 podman[286775]: 2025-11-29 05:50:29.83243054 +0000 UTC m=+0.172532465 container remove 09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:50:29 compute-0 systemd[1]: libpod-conmon-09034b1818fd333a8b7d91836aca410728f10f38b4c00c1be70a3f689b68391b.scope: Deactivated successfully.
Nov 29 05:50:30 compute-0 podman[286813]: 2025-11-29 05:50:30.029118985 +0000 UTC m=+0.046756174 container create d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 05:50:30 compute-0 systemd[1]: Started libpod-conmon-d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1.scope.
Nov 29 05:50:30 compute-0 podman[286813]: 2025-11-29 05:50:30.007464505 +0000 UTC m=+0.025101794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:50:30 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecaa9e5809a0573f1cf3b885fc10218e83290cbc6a2575471233d560b5e1ddf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecaa9e5809a0573f1cf3b885fc10218e83290cbc6a2575471233d560b5e1ddf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecaa9e5809a0573f1cf3b885fc10218e83290cbc6a2575471233d560b5e1ddf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecaa9e5809a0573f1cf3b885fc10218e83290cbc6a2575471233d560b5e1ddf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:30 compute-0 podman[286813]: 2025-11-29 05:50:30.14255002 +0000 UTC m=+0.160187229 container init d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:50:30 compute-0 podman[286813]: 2025-11-29 05:50:30.148396752 +0000 UTC m=+0.166033951 container start d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:50:30 compute-0 podman[286813]: 2025-11-29 05:50:30.152175982 +0000 UTC m=+0.169813171 container attach d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 05:50:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:30 compute-0 ceph-mon[75176]: pgmap v1414: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]: {
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:     "0": [
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:         {
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "devices": [
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "/dev/loop3"
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             ],
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_name": "ceph_lv0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_size": "21470642176",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "name": "ceph_lv0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "tags": {
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.cluster_name": "ceph",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.crush_device_class": "",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.encrypted": "0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.osd_id": "0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.type": "block",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.vdo": "0"
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             },
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "type": "block",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "vg_name": "ceph_vg0"
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:         }
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:     ],
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:     "1": [
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:         {
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "devices": [
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "/dev/loop4"
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             ],
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_name": "ceph_lv1",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_size": "21470642176",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "name": "ceph_lv1",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "tags": {
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.cluster_name": "ceph",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.crush_device_class": "",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.encrypted": "0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.osd_id": "1",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.type": "block",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.vdo": "0"
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             },
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "type": "block",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "vg_name": "ceph_vg1"
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:         }
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:     ],
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:     "2": [
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:         {
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "devices": [
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "/dev/loop5"
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             ],
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_name": "ceph_lv2",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_size": "21470642176",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "name": "ceph_lv2",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "tags": {
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.cluster_name": "ceph",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.crush_device_class": "",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.encrypted": "0",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.osd_id": "2",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.type": "block",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:                 "ceph.vdo": "0"
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             },
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "type": "block",
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:             "vg_name": "ceph_vg2"
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:         }
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]:     ]
Nov 29 05:50:30 compute-0 stupefied_wozniak[286830]: }
Nov 29 05:50:30 compute-0 systemd[1]: libpod-d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1.scope: Deactivated successfully.
Nov 29 05:50:30 compute-0 podman[286813]: 2025-11-29 05:50:30.926842893 +0000 UTC m=+0.944480092 container died d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:50:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecaa9e5809a0573f1cf3b885fc10218e83290cbc6a2575471233d560b5e1ddf6-merged.mount: Deactivated successfully.
Nov 29 05:50:30 compute-0 podman[286813]: 2025-11-29 05:50:30.989931739 +0000 UTC m=+1.007568928 container remove d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wozniak, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:50:30 compute-0 systemd[1]: libpod-conmon-d16211dd3b4cc74595da572ee91bb3cd3948ce687258dcd7d330946688b591d1.scope: Deactivated successfully.
Nov 29 05:50:31 compute-0 sudo[286710]: pam_unix(sudo:session): session closed for user root
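The JSON block emitted above maps OSD ids 0-2 to one LV each (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2, each 21470642176 bytes, roughly 20 GiB, backed by /dev/loop3-5). A sketch that re-runs the logged command and summarises the payload; --image and --timeout are trimmed for brevity, and it assumes the cephadm wrapper emits only the JSON on stdout:

    import json
    import subprocess

    FSID = "93f82912-647c-5e78-b081-707d0a2966d8"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    raw = subprocess.check_output(
        ["sudo", "/bin/python3", CEPHADM, "ceph-volume", "--fsid", FSID,
         "--", "lvm", "list", "--format", "json"], text=True)
    for osd_id, lvs in sorted(json.loads(raw).items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"], lv["tags"]["ceph.osd_fsid"])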
Nov 29 05:50:31 compute-0 sudo[286853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:31 compute-0 sudo[286853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:31 compute-0 sudo[286853]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:31 compute-0 sudo[286878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:50:31 compute-0 sudo[286878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:31 compute-0 sudo[286878]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:31 compute-0 sudo[286903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:31 compute-0 sudo[286903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:31 compute-0 sudo[286903]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:31 compute-0 sudo[286928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:50:31 compute-0 sudo[286928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
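Where lvm list reads LVM tags, the raw list pass starting here scans block devices for BlueStore labels, catching OSDs created with ceph-volume raw rather than lvm. A sketch of the same call, with all arguments copied from the logged command line:

    import subprocess

    FSID = "93f82912-647c-5e78-b081-707d0a2966d8"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    subprocess.run(["sudo", "/bin/python3", CEPHADM, "--image", IMAGE,
                    "--timeout", "895", "ceph-volume", "--fsid", FSID,
                    "--", "raw", "list", "--format", "json"], check=True)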
Nov 29 05:50:31 compute-0 podman[286994]: 2025-11-29 05:50:31.662087957 +0000 UTC m=+0.038673871 container create cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:50:31 compute-0 systemd[1]: Started libpod-conmon-cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7.scope.
Nov 29 05:50:31 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:50:31 compute-0 podman[286994]: 2025-11-29 05:50:31.643851919 +0000 UTC m=+0.020437853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:50:31 compute-0 podman[286994]: 2025-11-29 05:50:31.741165766 +0000 UTC m=+0.117751690 container init cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 05:50:31 compute-0 podman[286994]: 2025-11-29 05:50:31.746962185 +0000 UTC m=+0.123548089 container start cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:50:31 compute-0 nervous_moore[287010]: 167 167
Nov 29 05:50:31 compute-0 podman[286994]: 2025-11-29 05:50:31.751942715 +0000 UTC m=+0.128528629 container attach cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 05:50:31 compute-0 systemd[1]: libpod-cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7.scope: Deactivated successfully.
Nov 29 05:50:31 compute-0 podman[286994]: 2025-11-29 05:50:31.753256707 +0000 UTC m=+0.129842611 container died cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:50:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0fc78293ae7c5265f0248ddedb17ed1b351872d7e065a51e80bf2b5fffa82ab-merged.mount: Deactivated successfully.
Nov 29 05:50:31 compute-0 podman[286994]: 2025-11-29 05:50:31.788500634 +0000 UTC m=+0.165086548 container remove cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_moore, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 05:50:31 compute-0 systemd[1]: libpod-conmon-cedf76069d2fb59c70e7fe243eb40b3f61972bd9f99bbcac4183d4bf29f9d8e7.scope: Deactivated successfully.
Nov 29 05:50:31 compute-0 podman[287034]: 2025-11-29 05:50:31.940416194 +0000 UTC m=+0.036296613 container create cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:50:31 compute-0 systemd[1]: Started libpod-conmon-cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325.scope.
Nov 29 05:50:32 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c7a638abbd54b40252fd8a981ccca6432486feb2f18e0b7c9f3c46ba354361/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c7a638abbd54b40252fd8a981ccca6432486feb2f18e0b7c9f3c46ba354361/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c7a638abbd54b40252fd8a981ccca6432486feb2f18e0b7c9f3c46ba354361/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34c7a638abbd54b40252fd8a981ccca6432486feb2f18e0b7c9f3c46ba354361/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
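The kernel's repeated note that these xfs bind mounts "support timestamps until 2038 (0x7fffffff)" is just the signed 32-bit time_t ceiling; the date it corresponds to can be checked directly:

import datetime

# 0x7fffffff = 2**31 - 1 seconds since the Unix epoch, the signed
# 32-bit time_t limit the kernel is warning about for these xfs mounts.
limit = 0x7fffffff
print(datetime.datetime.fromtimestamp(limit, datetime.timezone.utc))
# -> 2038-01-19 03:14:07+00:00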
Nov 29 05:50:32 compute-0 podman[287034]: 2025-11-29 05:50:31.925867204 +0000 UTC m=+0.021747633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:50:32 compute-0 podman[287034]: 2025-11-29 05:50:32.022948996 +0000 UTC m=+0.118829455 container init cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 05:50:32 compute-0 podman[287034]: 2025-11-29 05:50:32.032010024 +0000 UTC m=+0.127890423 container start cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:50:32 compute-0 podman[287034]: 2025-11-29 05:50:32.038681664 +0000 UTC m=+0.134562073 container attach cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 29 05:50:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:32 compute-0 ceph-mon[75176]: pgmap v1415: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]: {
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "osd_id": 0,
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "type": "bluestore"
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:     },
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "osd_id": 1,
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "type": "bluestore"
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:     },
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "osd_id": 2,
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:         "type": "bluestore"
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]:     }
Nov 29 05:50:32 compute-0 gracious_aryabhata[287051]: }
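The JSON block printed by gracious_aryabhata is an OSD inventory keyed by OSD UUID, each entry naming the cluster fsid, the LVM device, the OSD id, and the bluestore backing type; cephadm stores the result under the mgr/cephadm/host.compute-0.devices.0 config-key a moment later (see the mon_command at 05:50:33 below). A small sketch of consuming such a report, assuming the container output has been captured to a file:

import json

# Summarize a ceph-volume-style OSD report like the one printed above.
with open("osd_report.json") as f:        # hypothetical capture path
    osds = json.load(f)

for osd_uuid, meta in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
    print(f'osd.{meta["osd_id"]}: {meta["type"]} on {meta["device"]} '
          f'(cluster {meta["ceph_fsid"]})')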
Nov 29 05:50:32 compute-0 systemd[1]: libpod-cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325.scope: Deactivated successfully.
Nov 29 05:50:32 compute-0 podman[287034]: 2025-11-29 05:50:32.992845687 +0000 UTC m=+1.088726106 container died cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:50:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-34c7a638abbd54b40252fd8a981ccca6432486feb2f18e0b7c9f3c46ba354361-merged.mount: Deactivated successfully.
Nov 29 05:50:33 compute-0 podman[287034]: 2025-11-29 05:50:33.051712271 +0000 UTC m=+1.147592680 container remove cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:50:33 compute-0 systemd[1]: libpod-conmon-cb4a503b735a140d1b3620821e56d021f2888412b7161f80748be57bcc6b5325.scope: Deactivated successfully.
Nov 29 05:50:33 compute-0 sudo[286928]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:50:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:50:33 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:50:33 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:50:33 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev ad2956fe-baef-4782-89ed-a089b5d0114e does not exist
Nov 29 05:50:33 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev e3fe0e5c-21e6-4dfa-9a19-f35fffebac75 does not exist
Nov 29 05:50:33 compute-0 sudo[287098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:50:33 compute-0 sudo[287098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:33 compute-0 sudo[287098]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:33 compute-0 sudo[287123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:50:33 compute-0 sudo[287123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:50:33 compute-0 sudo[287123]: pam_unix(sudo:session): session closed for user root
Nov 29 05:50:34 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:50:34 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:50:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:35 compute-0 ceph-mon[75176]: pgmap v1416: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:36 compute-0 ceph-mon[75176]: pgmap v1417: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:38 compute-0 ceph-mon[75176]: pgmap v1418: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:40 compute-0 ceph-mon[75176]: pgmap v1419: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:50:41
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:50:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:50:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:50:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:50:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:42 compute-0 ceph-mon[75176]: pgmap v1420: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:43 compute-0 nova_compute[254898]: 2025-11-29 05:50:43.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:50:44 compute-0 podman[287148]: 2025-11-29 05:50:44.043159733 +0000 UTC m=+0.093552298 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
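Note that the config_data field in these podman health_status events is a Python-literal dict (single quotes, bare True), not strict JSON, so json.loads rejects it; ast.literal_eval parses it safely. A sketch on a trimmed copy of the multipathd entry above:

import ast

# config_data as podman logs it above: a Python literal, not JSON.
config_data = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
               "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', "
               "'test': '/openstack/healthcheck'}, "
               "'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', "
               "'net': 'host', 'privileged': True, 'restart': 'always'}")

cfg = ast.literal_eval(config_data)   # evaluates literals only; no code execution
print(cfg["healthcheck"]["test"])     # -> /openstack/healthcheck
print(cfg["image"])                   # the image the healthcheck runs against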
Nov 29 05:50:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:44 compute-0 ceph-mon[75176]: pgmap v1421: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:44 compute-0 nova_compute[254898]: 2025-11-29 05:50:44.950 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:50:45 compute-0 nova_compute[254898]: 2025-11-29 05:50:45.073 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:50:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:46 compute-0 ceph-mon[75176]: pgmap v1422: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:46 compute-0 nova_compute[254898]: 2025-11-29 05:50:46.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:50:47 compute-0 podman[287166]: 2025-11-29 05:50:47.022189532 +0000 UTC m=+0.079633524 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 05:50:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:47 compute-0 nova_compute[254898]: 2025-11-29 05:50:47.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:50:47 compute-0 nova_compute[254898]: 2025-11-29 05:50:47.981 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:50:47 compute-0 nova_compute[254898]: 2025-11-29 05:50:47.982 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:50:47 compute-0 nova_compute[254898]: 2025-11-29 05:50:47.982 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:50:47 compute-0 nova_compute[254898]: 2025-11-29 05:50:47.983 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:50:47 compute-0 nova_compute[254898]: 2025-11-29 05:50:47.983 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:50:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:50:48 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/325678523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:50:48 compute-0 nova_compute[254898]: 2025-11-29 05:50:48.409 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:50:48 compute-0 ceph-mon[75176]: pgmap v1423: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:48 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/325678523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
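The resource-tracker audit above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (roughly 0.4 s round trips, mirrored by the mon's audit-channel dispatch lines). A stdlib-only reproduction of that probe; the layout of the "stats" section is an assumption based on recent Ceph releases, not confirmed by this log:

import json, subprocess

# Run the same capacity probe nova_compute logs above.
cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
report = json.loads(subprocess.check_output(cmd))

# Assumed layout: a top-level "stats" dict with total/avail byte counters.
stats = report["stats"]
gib = 1024 ** 3
print(f'{stats["total_avail_bytes"] / gib:.1f} GiB free of '
      f'{stats["total_bytes"] / gib:.1f} GiB raw')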
Nov 29 05:50:48 compute-0 nova_compute[254898]: 2025-11-29 05:50:48.563 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:50:48 compute-0 nova_compute[254898]: 2025-11-29 05:50:48.564 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4944MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:50:48 compute-0 nova_compute[254898]: 2025-11-29 05:50:48.564 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:50:48 compute-0 nova_compute[254898]: 2025-11-29 05:50:48.564 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:50:48 compute-0 nova_compute[254898]: 2025-11-29 05:50:48.632 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:50:48 compute-0 nova_compute[254898]: 2025-11-29 05:50:48.633 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:50:48 compute-0 nova_compute[254898]: 2025-11-29 05:50:48.650 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:50:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:50:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1593395698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:50:49 compute-0 nova_compute[254898]: 2025-11-29 05:50:49.049 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:50:49 compute-0 nova_compute[254898]: 2025-11-29 05:50:49.056 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:50:49 compute-0 nova_compute[254898]: 2025-11-29 05:50:49.074 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
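The inventory dict above is enough to reproduce the schedulable capacity placement will enforce, using the usual formula capacity = (total − reserved) × allocation_ratio: 8 × 4.0 = 32 VCPUs, (7680 − 512) × 1.0 = 7168 MB of RAM, and 59 × 0.9 ≈ 53 GB of disk. As a check:

# Schedulable capacity implied by the provider inventory logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 53.1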
Nov 29 05:50:49 compute-0 nova_compute[254898]: 2025-11-29 05:50:49.078 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:50:49 compute-0 nova_compute[254898]: 2025-11-29 05:50:49.078 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:50:49 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1593395698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:50:50 compute-0 nova_compute[254898]: 2025-11-29 05:50:50.080 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:50:50 compute-0 nova_compute[254898]: 2025-11-29 05:50:50.081 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:50:50 compute-0 nova_compute[254898]: 2025-11-29 05:50:50.081 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:50:50 compute-0 nova_compute[254898]: 2025-11-29 05:50:50.081 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:50:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:50 compute-0 ceph-mon[75176]: pgmap v1424: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:50:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
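The pg_autoscaler figures above are internally consistent with pg_target = usage_fraction × bias × (target PGs per OSD × OSD count); with the default mon_target_pg_per_osd of 100 and the three OSDs listed earlier, the multiplier is 300 (an assumption, but one every pool line here satisfies), after which the target is quantized to a power of two and left at the current pg_num when the change would be small. Checking three of the pools:

# Verify the pg_autoscaler targets logged above.
# Assumption: multiplier = mon_target_pg_per_osd (100) * 3 OSDs = 300.
FACTOR = 100 * 3

pools = [
    (".mgr",               7.185749983720779e-06, 1.0),
    ("images",             0.000665858301588852,  1.0),
    ("cephfs.cephfs.meta", 0.0005435097797421371, 4.0),
]
for name, used_fraction, bias in pools:
    print(f"{name}: pg target {used_fraction * bias * FACTOR:.6g}")
# .mgr -> 0.00215572, images -> 0.199757, cephfs.cephfs.meta -> 0.652212,
# matching the "pg target" values in the log before quantization.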
Nov 29 05:50:51 compute-0 nova_compute[254898]: 2025-11-29 05:50:51.955 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:50:51 compute-0 nova_compute[254898]: 2025-11-29 05:50:51.955 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:50:51 compute-0 nova_compute[254898]: 2025-11-29 05:50:51.956 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:50:51 compute-0 nova_compute[254898]: 2025-11-29 05:50:51.978 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:50:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:52 compute-0 ceph-mon[75176]: pgmap v1425: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:53 compute-0 nova_compute[254898]: 2025-11-29 05:50:53.971 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:50:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:54 compute-0 ceph-mon[75176]: pgmap v1426: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:56 compute-0 ceph-mon[75176]: pgmap v1427: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 29 05:50:56 compute-0 podman[287236]: 2025-11-29 05:50:56.997985355 +0000 UTC m=+0.053993188 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 05:50:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:50:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:50:58 compute-0 ceph-mon[75176]: pgmap v1428: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:00 compute-0 ceph-mon[75176]: pgmap v1429: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:02 compute-0 ceph-mon[75176]: pgmap v1430: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:04 compute-0 ceph-mon[75176]: pgmap v1431: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:06 compute-0 ceph-mon[75176]: pgmap v1432: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.888684) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467888788, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1224, "num_deletes": 251, "total_data_size": 1871730, "memory_usage": 1902512, "flush_reason": "Manual Compaction"}
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467900851, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1109578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31037, "largest_seqno": 32260, "table_properties": {"data_size": 1105092, "index_size": 1946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11708, "raw_average_key_size": 20, "raw_value_size": 1095392, "raw_average_value_size": 1931, "num_data_blocks": 89, "num_entries": 567, "num_filter_entries": 567, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764395346, "oldest_key_time": 1764395346, "file_creation_time": 1764395467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 12244 microseconds, and 8003 cpu microseconds.
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.900934) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1109578 bytes OK
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.900965) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.902845) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.902870) EVENT_LOG_v1 {"time_micros": 1764395467902862, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.902894) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1866175, prev total WAL file size 1866175, number of live WAL files 2.
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.904101) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1083KB)], [65(10166KB)]
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467904153, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 11519753, "oldest_snapshot_seqno": -1}
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6226 keys, 8929158 bytes, temperature: kUnknown
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467960387, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 8929158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8888290, "index_size": 24182, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 156584, "raw_average_key_size": 25, "raw_value_size": 8777459, "raw_average_value_size": 1409, "num_data_blocks": 990, "num_entries": 6226, "num_filter_entries": 6226, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764395467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.960595) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 8929158 bytes
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.961787) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.6 rd, 158.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.9 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(18.4) write-amplify(8.0) OK, records in: 6685, records dropped: 459 output_compression: NoCompression
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.961801) EVENT_LOG_v1 {"time_micros": 1764395467961794, "job": 36, "event": "compaction_finished", "compaction_time_micros": 56302, "compaction_time_cpu_micros": 20519, "output_level": 6, "num_output_files": 1, "total_output_size": 8929158, "num_input_records": 6685, "num_output_records": 6226, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467962043, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395467963467, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.904034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.963559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.963565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.963567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.963569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:51:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:51:07.963571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
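The amplification figures in the JOB 36 compaction summary can be re-derived from the byte counts in the surrounding EVENT_LOG entries: the flushed L0 input was table #67 (1,109,578 bytes), total compaction input was 11,519,753 bytes, and the L6 output was table #68 (8,929,158 bytes).

# Re-derive rocksdb's JOB 36 amplification figures from the log above.
l0_in    = 1_109_578      # table #67, the freshly flushed L0 file
total_in = 11_519_753     # "input_data_size" in the compaction_started event
out      = 8_929_158      # table #68, the compacted L6 output

write_amp      = out / l0_in               # 8.0  (logged: write-amplify(8.0))
read_write_amp = (total_in + out) / l0_in  # 18.4 (logged: read-write-amplify(18.4))
print(f"write-amplify {write_amp:.1f}, read-write-amplify {read_write_amp:.1f}")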
Nov 29 05:51:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:08 compute-0 ceph-mon[75176]: pgmap v1433: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:10 compute-0 ceph-mon[75176]: pgmap v1434: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:10 compute-0 sshd-session[287256]: Invalid user dolphinscheduler from 45.120.216.232 port 51530
Nov 29 05:51:10 compute-0 sshd-session[287256]: Received disconnect from 45.120.216.232 port 51530:11: Bye Bye [preauth]
Nov 29 05:51:10 compute-0 sshd-session[287256]: Disconnected from invalid user dolphinscheduler 45.120.216.232 port 51530 [preauth]
Nov 29 05:51:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:51:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:51:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:51:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:51:11 compute-0 sshd-session[287258]: Received disconnect from 45.78.219.254 port 54886:11: Bye Bye [preauth]
Nov 29 05:51:11 compute-0 sshd-session[287258]: Disconnected from authenticating user root 45.78.219.254 port 54886 [preauth]
Nov 29 05:51:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:51:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:51:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:12 compute-0 ceph-mon[75176]: pgmap v1435: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:51:13.766 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:51:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:51:13.766 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:51:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:51:13.766 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:51:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:51:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3386820668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:51:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:51:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3386820668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:51:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3386820668' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:51:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/3386820668' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:51:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:15 compute-0 podman[287260]: 2025-11-29 05:51:15.004144791 +0000 UTC m=+0.059316995 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:51:15 compute-0 ceph-mon[75176]: pgmap v1436: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:16 compute-0 ceph-mon[75176]: pgmap v1437: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:18 compute-0 podman[287278]: 2025-11-29 05:51:18.066355338 +0000 UTC m=+0.115706840 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 05:51:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:18 compute-0 ceph-mon[75176]: pgmap v1438: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:20 compute-0 ceph-mon[75176]: pgmap v1439: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:22 compute-0 ceph-mon[75176]: pgmap v1440: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:24 compute-0 ceph-mon[75176]: pgmap v1441: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:26 compute-0 ceph-mon[75176]: pgmap v1442: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:28 compute-0 podman[287304]: 2025-11-29 05:51:28.002128257 +0000 UTC m=+0.053388464 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:51:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:28 compute-0 ceph-mon[75176]: pgmap v1443: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:30 compute-0 ceph-mon[75176]: pgmap v1444: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:31 compute-0 sshd-session[287325]: Invalid user user from 45.249.245.22 port 48344
Nov 29 05:51:32 compute-0 sshd-session[287325]: Received disconnect from 45.249.245.22 port 48344:11: Bye Bye [preauth]
Nov 29 05:51:32 compute-0 sshd-session[287325]: Disconnected from invalid user user 45.249.245.22 port 48344 [preauth]
Nov 29 05:51:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:32 compute-0 ceph-mon[75176]: pgmap v1445: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:33 compute-0 sudo[287327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:33 compute-0 sudo[287327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:33 compute-0 sudo[287327]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:33 compute-0 sudo[287352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:51:33 compute-0 sudo[287352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:33 compute-0 sudo[287352]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:33 compute-0 sudo[287377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:33 compute-0 sudo[287377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:33 compute-0 sudo[287377]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:33 compute-0 sudo[287402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 29 05:51:33 compute-0 sudo[287402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:34 compute-0 podman[287497]: 2025-11-29 05:51:34.016128699 +0000 UTC m=+0.077324248 container exec 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 05:51:34 compute-0 podman[287497]: 2025-11-29 05:51:34.138663043 +0000 UTC m=+0.199858622 container exec_died 8221d7b65f9dee04deed2d140d35ab142f6ca067839c8ec1597534673bff6113 (image=quay.io/ceph/ceph:v18, name=ceph-93f82912-647c-5e78-b081-707d0a2966d8-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 05:51:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:34 compute-0 ceph-mon[75176]: pgmap v1446: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:34 compute-0 sudo[287402]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:51:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:51:34 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:51:34 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:51:34 compute-0 sudo[287658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:34 compute-0 sudo[287658]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:35 compute-0 sudo[287658]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:35 compute-0 sudo[287683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:51:35 compute-0 sudo[287683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:35 compute-0 sudo[287683]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:35 compute-0 sudo[287708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:35 compute-0 sudo[287708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:35 compute-0 sudo[287708]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:35 compute-0 sudo[287733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:51:35 compute-0 sudo[287733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:35 compute-0 sudo[287733]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 05:51:35 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:51:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:51:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:51:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:51:35 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:51:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:51:35 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:51:35 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 6bb99168-2cb4-481a-bd80-ca589e21c9a2 does not exist
Nov 29 05:51:35 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 870a3744-574f-4cce-be67-1e096a301f14 does not exist
Nov 29 05:51:35 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a61a9331-df76-4314-8462-e4e9bbee2498 does not exist
Nov 29 05:51:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:51:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:51:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:51:35 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:51:35 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:51:35 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:51:35 compute-0 sudo[287788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:35 compute-0 sudo[287788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:35 compute-0 sudo[287788]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:35 compute-0 sudo[287813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:51:35 compute-0 sudo[287813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:35 compute-0 sudo[287813]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:35 compute-0 sudo[287838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:35 compute-0 sudo[287838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:35 compute-0 sudo[287838]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:35 compute-0 sudo[287863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:51:35 compute-0 sudo[287863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:51:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:51:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 05:51:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:51:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:51:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:51:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:51:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:51:35 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:51:36 compute-0 podman[287927]: 2025-11-29 05:51:36.233056119 +0000 UTC m=+0.039841838 container create b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:51:36 compute-0 systemd[1]: Started libpod-conmon-b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b.scope.
Nov 29 05:51:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:51:36 compute-0 podman[287927]: 2025-11-29 05:51:36.215012816 +0000 UTC m=+0.021798585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:51:36 compute-0 podman[287927]: 2025-11-29 05:51:36.311597766 +0000 UTC m=+0.118383515 container init b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 05:51:36 compute-0 podman[287927]: 2025-11-29 05:51:36.316945815 +0000 UTC m=+0.123731534 container start b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 05:51:36 compute-0 podman[287927]: 2025-11-29 05:51:36.320030578 +0000 UTC m=+0.126816347 container attach b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:51:36 compute-0 eager_bhabha[287943]: 167 167
Nov 29 05:51:36 compute-0 systemd[1]: libpod-b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b.scope: Deactivated successfully.
Nov 29 05:51:36 compute-0 podman[287927]: 2025-11-29 05:51:36.322048548 +0000 UTC m=+0.128834287 container died b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 05:51:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-64a2d5be05568cb2c643080b0fdee7b9074c9390179112758e887be3505a170f-merged.mount: Deactivated successfully.
Nov 29 05:51:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:36 compute-0 podman[287927]: 2025-11-29 05:51:36.362646983 +0000 UTC m=+0.169432692 container remove b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bhabha, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:51:36 compute-0 systemd[1]: libpod-conmon-b88e7dbfa99c0e5362752afd2a61010255fc1074479d65c529dd64bb7435de1b.scope: Deactivated successfully.
Nov 29 05:51:36 compute-0 podman[287969]: 2025-11-29 05:51:36.544974903 +0000 UTC m=+0.053302412 container create 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:51:36 compute-0 systemd[1]: Started libpod-conmon-7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac.scope.
Nov 29 05:51:36 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:36 compute-0 podman[287969]: 2025-11-29 05:51:36.620984799 +0000 UTC m=+0.129312358 container init 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:51:36 compute-0 podman[287969]: 2025-11-29 05:51:36.52529577 +0000 UTC m=+0.033623279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:51:36 compute-0 podman[287969]: 2025-11-29 05:51:36.628648823 +0000 UTC m=+0.136976342 container start 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 05:51:36 compute-0 podman[287969]: 2025-11-29 05:51:36.632140657 +0000 UTC m=+0.140468176 container attach 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:51:36 compute-0 ceph-mon[75176]: pgmap v1447: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:37 compute-0 sshd-session[287960]: Invalid user user from 154.221.27.234 port 48583
Nov 29 05:51:37 compute-0 vigorous_archimedes[287986]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:51:37 compute-0 vigorous_archimedes[287986]: --> relative data size: 1.0
Nov 29 05:51:37 compute-0 vigorous_archimedes[287986]: --> All data devices are unavailable
Nov 29 05:51:37 compute-0 systemd[1]: libpod-7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac.scope: Deactivated successfully.
Nov 29 05:51:37 compute-0 podman[287969]: 2025-11-29 05:51:37.628580646 +0000 UTC m=+1.136908195 container died 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 05:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7cd4e9e4c97cee29b3bb92cd5414243c4117b37d136b06beb420a0f280ecfb6-merged.mount: Deactivated successfully.
Nov 29 05:51:37 compute-0 sshd-session[287960]: Received disconnect from 154.221.27.234 port 48583:11: Bye Bye [preauth]
Nov 29 05:51:37 compute-0 sshd-session[287960]: Disconnected from invalid user user 154.221.27.234 port 48583 [preauth]
Nov 29 05:51:37 compute-0 podman[287969]: 2025-11-29 05:51:37.674418117 +0000 UTC m=+1.182745626 container remove 7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:51:37 compute-0 systemd[1]: libpod-conmon-7db544d187f6e35bf1dad01236692db2cd019cb8fc05030aff30143a889284ac.scope: Deactivated successfully.
Nov 29 05:51:37 compute-0 sudo[287863]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:37 compute-0 sudo[288029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:37 compute-0 sudo[288029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:37 compute-0 sudo[288029]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:37 compute-0 sudo[288054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:51:37 compute-0 sudo[288054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:37 compute-0 sudo[288054]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:37 compute-0 sudo[288079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:37 compute-0 sudo[288079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:37 compute-0 sudo[288079]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:37 compute-0 sudo[288104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:51:37 compute-0 sudo[288104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:38 compute-0 podman[288169]: 2025-11-29 05:51:38.213713103 +0000 UTC m=+0.034581802 container create d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 05:51:38 compute-0 systemd[1]: Started libpod-conmon-d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c.scope.
Nov 29 05:51:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:51:38 compute-0 podman[288169]: 2025-11-29 05:51:38.200149517 +0000 UTC m=+0.021018236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:51:38 compute-0 podman[288169]: 2025-11-29 05:51:38.302288951 +0000 UTC m=+0.123157680 container init d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:51:38 compute-0 podman[288169]: 2025-11-29 05:51:38.309958946 +0000 UTC m=+0.130827645 container start d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 05:51:38 compute-0 podman[288169]: 2025-11-29 05:51:38.313504311 +0000 UTC m=+0.134373040 container attach d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:51:38 compute-0 sharp_edison[288185]: 167 167
Nov 29 05:51:38 compute-0 systemd[1]: libpod-d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c.scope: Deactivated successfully.
Nov 29 05:51:38 compute-0 podman[288169]: 2025-11-29 05:51:38.314931725 +0000 UTC m=+0.135800424 container died d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ced125e79e99ee2c43c8308b39ef910c5fb9a10050da171033bb02f074709c98-merged.mount: Deactivated successfully.
Nov 29 05:51:38 compute-0 podman[288169]: 2025-11-29 05:51:38.348505741 +0000 UTC m=+0.169374440 container remove d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:51:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:38 compute-0 systemd[1]: libpod-conmon-d3b871a78239823a4b01337db2641deb46c25651b7f808b946dfd37abf0f243c.scope: Deactivated successfully.
Nov 29 05:51:38 compute-0 ceph-mon[75176]: pgmap v1448: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:38 compute-0 podman[288210]: 2025-11-29 05:51:38.49660344 +0000 UTC m=+0.040228718 container create 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 05:51:38 compute-0 systemd[1]: Started libpod-conmon-3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a.scope.
Nov 29 05:51:38 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2cde08161da0160c2d64f8860d103fbe1e6bba0351a2194f45dcc68e60e73b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2cde08161da0160c2d64f8860d103fbe1e6bba0351a2194f45dcc68e60e73b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2cde08161da0160c2d64f8860d103fbe1e6bba0351a2194f45dcc68e60e73b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2cde08161da0160c2d64f8860d103fbe1e6bba0351a2194f45dcc68e60e73b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:38 compute-0 podman[288210]: 2025-11-29 05:51:38.568739292 +0000 UTC m=+0.112364580 container init 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 05:51:38 compute-0 podman[288210]: 2025-11-29 05:51:38.478756511 +0000 UTC m=+0.022381839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:51:38 compute-0 podman[288210]: 2025-11-29 05:51:38.5753085 +0000 UTC m=+0.118933778 container start 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:51:38 compute-0 podman[288210]: 2025-11-29 05:51:38.578178329 +0000 UTC m=+0.121803617 container attach 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 05:51:39 compute-0 charming_babbage[288226]: {
Nov 29 05:51:39 compute-0 charming_babbage[288226]:     "0": [
Nov 29 05:51:39 compute-0 charming_babbage[288226]:         {
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "devices": [
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "/dev/loop3"
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             ],
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_name": "ceph_lv0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_size": "21470642176",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "name": "ceph_lv0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "tags": {
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.cluster_name": "ceph",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.crush_device_class": "",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.encrypted": "0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.osd_id": "0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.type": "block",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.vdo": "0"
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             },
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "type": "block",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "vg_name": "ceph_vg0"
Nov 29 05:51:39 compute-0 charming_babbage[288226]:         }
Nov 29 05:51:39 compute-0 charming_babbage[288226]:     ],
Nov 29 05:51:39 compute-0 charming_babbage[288226]:     "1": [
Nov 29 05:51:39 compute-0 charming_babbage[288226]:         {
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "devices": [
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "/dev/loop4"
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             ],
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_name": "ceph_lv1",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_size": "21470642176",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "name": "ceph_lv1",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "tags": {
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.cluster_name": "ceph",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.crush_device_class": "",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.encrypted": "0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.osd_id": "1",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.type": "block",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.vdo": "0"
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             },
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "type": "block",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "vg_name": "ceph_vg1"
Nov 29 05:51:39 compute-0 charming_babbage[288226]:         }
Nov 29 05:51:39 compute-0 charming_babbage[288226]:     ],
Nov 29 05:51:39 compute-0 charming_babbage[288226]:     "2": [
Nov 29 05:51:39 compute-0 charming_babbage[288226]:         {
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "devices": [
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "/dev/loop5"
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             ],
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_name": "ceph_lv2",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_size": "21470642176",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "name": "ceph_lv2",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "tags": {
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.cluster_name": "ceph",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.crush_device_class": "",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.encrypted": "0",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.osd_id": "2",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.type": "block",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:                 "ceph.vdo": "0"
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             },
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "type": "block",
Nov 29 05:51:39 compute-0 charming_babbage[288226]:             "vg_name": "ceph_vg2"
Nov 29 05:51:39 compute-0 charming_babbage[288226]:         }
Nov 29 05:51:39 compute-0 charming_babbage[288226]:     ]
Nov 29 05:51:39 compute-0 charming_babbage[288226]: }
Nov 29 05:51:39 compute-0 systemd[1]: libpod-3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a.scope: Deactivated successfully.
Nov 29 05:51:39 compute-0 podman[288210]: 2025-11-29 05:51:39.317610943 +0000 UTC m=+0.861236221 container died 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:51:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e2cde08161da0160c2d64f8860d103fbe1e6bba0351a2194f45dcc68e60e73b-merged.mount: Deactivated successfully.
Nov 29 05:51:39 compute-0 podman[288210]: 2025-11-29 05:51:39.362736148 +0000 UTC m=+0.906361426 container remove 3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:51:39 compute-0 systemd[1]: libpod-conmon-3bb9d9ac953c5203b91f8707cb30ac9ed7ba1c3b1f025db4798130c09cdf3c2a.scope: Deactivated successfully.
Nov 29 05:51:39 compute-0 sudo[288104]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:39 compute-0 sudo[288248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:39 compute-0 sudo[288248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:39 compute-0 sudo[288248]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:39 compute-0 sudo[288273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:51:39 compute-0 sudo[288273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:39 compute-0 sudo[288273]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:39 compute-0 sudo[288298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:39 compute-0 sudo[288298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:39 compute-0 sudo[288298]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:39 compute-0 sudo[288323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:51:39 compute-0 sudo[288323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:39 compute-0 podman[288387]: 2025-11-29 05:51:39.906553913 +0000 UTC m=+0.034540631 container create 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:51:39 compute-0 systemd[1]: Started libpod-conmon-442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc.scope.
Nov 29 05:51:39 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:51:39 compute-0 podman[288387]: 2025-11-29 05:51:39.975854237 +0000 UTC m=+0.103840975 container init 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:51:39 compute-0 podman[288387]: 2025-11-29 05:51:39.983101392 +0000 UTC m=+0.111088110 container start 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 05:51:39 compute-0 podman[288387]: 2025-11-29 05:51:39.98637316 +0000 UTC m=+0.114359878 container attach 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:51:39 compute-0 busy_mcnulty[288404]: 167 167
Nov 29 05:51:39 compute-0 podman[288387]: 2025-11-29 05:51:39.892755191 +0000 UTC m=+0.020741929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:51:39 compute-0 systemd[1]: libpod-442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc.scope: Deactivated successfully.
Nov 29 05:51:39 compute-0 podman[288387]: 2025-11-29 05:51:39.988091981 +0000 UTC m=+0.116078699 container died 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f1629f967f91c5a7a2e86ad4915225d1620ed614859ed1ea6bf2e0dc559c860-merged.mount: Deactivated successfully.
Nov 29 05:51:40 compute-0 podman[288387]: 2025-11-29 05:51:40.019396704 +0000 UTC m=+0.147383422 container remove 442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 05:51:40 compute-0 systemd[1]: libpod-conmon-442d473a2ecb90e6e5293869cfc6240fbec439874ddb36998ece580421ca69fc.scope: Deactivated successfully.
Nov 29 05:51:40 compute-0 podman[288427]: 2025-11-29 05:51:40.155221357 +0000 UTC m=+0.034706135 container create 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:51:40 compute-0 systemd[1]: Started libpod-conmon-03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14.scope.
Nov 29 05:51:40 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b64d2cd0b8cb65689ef7459e84194f4846c0892374d3732144fd047c02ea7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b64d2cd0b8cb65689ef7459e84194f4846c0892374d3732144fd047c02ea7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b64d2cd0b8cb65689ef7459e84194f4846c0892374d3732144fd047c02ea7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b64d2cd0b8cb65689ef7459e84194f4846c0892374d3732144fd047c02ea7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:51:40 compute-0 podman[288427]: 2025-11-29 05:51:40.234207224 +0000 UTC m=+0.113692032 container init 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:51:40 compute-0 podman[288427]: 2025-11-29 05:51:40.139459818 +0000 UTC m=+0.018944616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:51:40 compute-0 podman[288427]: 2025-11-29 05:51:40.242808191 +0000 UTC m=+0.122292969 container start 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 05:51:40 compute-0 podman[288427]: 2025-11-29 05:51:40.247338179 +0000 UTC m=+0.126822957 container attach 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:51:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:40 compute-0 ceph-mon[75176]: pgmap v1449: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:41 compute-0 sharp_sammet[288444]: {
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "osd_id": 0,
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "type": "bluestore"
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:     },
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "osd_id": 1,
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "type": "bluestore"
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:     },
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "osd_id": 2,
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:         "type": "bluestore"
Nov 29 05:51:41 compute-0 sharp_sammet[288444]:     }
Nov 29 05:51:41 compute-0 sharp_sammet[288444]: }
Nov 29 05:51:41 compute-0 systemd[1]: libpod-03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14.scope: Deactivated successfully.
Nov 29 05:51:41 compute-0 podman[288427]: 2025-11-29 05:51:41.300503471 +0000 UTC m=+1.179988319 container died 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 05:51:41 compute-0 systemd[1]: libpod-03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14.scope: Consumed 1.067s CPU time.
Nov 29 05:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b64d2cd0b8cb65689ef7459e84194f4846c0892374d3732144fd047c02ea7c-merged.mount: Deactivated successfully.
Nov 29 05:51:41 compute-0 podman[288427]: 2025-11-29 05:51:41.365069293 +0000 UTC m=+1.244554081 container remove 03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 05:51:41 compute-0 systemd[1]: libpod-conmon-03f2700f0d8e581c6ddc907fb62a401313ec84d197b40e49a57307393a6a6d14.scope: Deactivated successfully.
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:51:41
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'images', 'vms', 'default.rgw.meta', '.mgr']
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
Nov 29 05:51:41 compute-0 sudo[288323]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:51:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:51:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:51:41 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:51:41 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev d94669fa-0adf-40a2-9b8b-af5d38734830 does not exist
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 7aa1b7b2-4002-4ff8-9417-15a95e86c32a does not exist
Nov 29 05:51:41 compute-0 sudo[288490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:51:41 compute-0 sudo[288490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:41 compute-0 sudo[288490]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:41 compute-0 sudo[288515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:51:41 compute-0 sudo[288515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:51:41 compute-0 sudo[288515]: pam_unix(sudo:session): session closed for user root
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:51:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:51:41 compute-0 sshd-session[288457]: Invalid user kiosk from 152.32.145.111 port 52756
Nov 29 05:51:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:51:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:51:42 compute-0 sshd-session[288457]: Received disconnect from 152.32.145.111 port 52756:11: Bye Bye [preauth]
Nov 29 05:51:42 compute-0 sshd-session[288457]: Disconnected from invalid user kiosk 152.32.145.111 port 52756 [preauth]
Nov 29 05:51:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:51:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:51:42 compute-0 ceph-mon[75176]: pgmap v1450: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:44 compute-0 ceph-mon[75176]: pgmap v1451: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:44 compute-0 nova_compute[254898]: 2025-11-29 05:51:44.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:51:44 compute-0 nova_compute[254898]: 2025-11-29 05:51:44.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:51:46 compute-0 podman[288540]: 2025-11-29 05:51:46.004600695 +0000 UTC m=+0.057164375 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 05:51:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:46 compute-0 ceph-mon[75176]: pgmap v1452: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:48 compute-0 ceph-mon[75176]: pgmap v1453: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:48 compute-0 nova_compute[254898]: 2025-11-29 05:51:48.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:51:49 compute-0 podman[288561]: 2025-11-29 05:51:49.056820493 +0000 UTC m=+0.110897915 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 05:51:49 compute-0 nova_compute[254898]: 2025-11-29 05:51:49.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:51:49 compute-0 nova_compute[254898]: 2025-11-29 05:51:49.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:51:49 compute-0 nova_compute[254898]: 2025-11-29 05:51:49.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:51:49 compute-0 nova_compute[254898]: 2025-11-29 05:51:49.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:51:49 compute-0 nova_compute[254898]: 2025-11-29 05:51:49.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:51:49 compute-0 nova_compute[254898]: 2025-11-29 05:51:49.983 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:51:49 compute-0 nova_compute[254898]: 2025-11-29 05:51:49.983 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:51:49 compute-0 nova_compute[254898]: 2025-11-29 05:51:49.983 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:51:49 compute-0 nova_compute[254898]: 2025-11-29 05:51:49.983 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:51:49 compute-0 nova_compute[254898]: 2025-11-29 05:51:49.984 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:51:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:51:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542259907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:51:50 compute-0 nova_compute[254898]: 2025-11-29 05:51:50.421 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:51:50 compute-0 ceph-mon[75176]: pgmap v1454: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:50 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/542259907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:51:50 compute-0 sshd-session[288588]: Invalid user testftp from 192.161.60.110 port 32890
Nov 29 05:51:50 compute-0 nova_compute[254898]: 2025-11-29 05:51:50.637 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:51:50 compute-0 nova_compute[254898]: 2025-11-29 05:51:50.638 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4969MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:51:50 compute-0 nova_compute[254898]: 2025-11-29 05:51:50.638 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:51:50 compute-0 nova_compute[254898]: 2025-11-29 05:51:50.638 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:51:50 compute-0 sshd-session[288588]: Received disconnect from 192.161.60.110 port 32890:11: Bye Bye [preauth]
Nov 29 05:51:50 compute-0 sshd-session[288588]: Disconnected from invalid user testftp 192.161.60.110 port 32890 [preauth]
Nov 29 05:51:50 compute-0 nova_compute[254898]: 2025-11-29 05:51:50.686 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:51:50 compute-0 nova_compute[254898]: 2025-11-29 05:51:50.687 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:51:50 compute-0 nova_compute[254898]: 2025-11-29 05:51:50.699 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:51:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:51:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1330873549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:51:51 compute-0 nova_compute[254898]: 2025-11-29 05:51:51.111 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:51:51 compute-0 nova_compute[254898]: 2025-11-29 05:51:51.116 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:51:51 compute-0 nova_compute[254898]: 2025-11-29 05:51:51.153 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:51:51 compute-0 nova_compute[254898]: 2025-11-29 05:51:51.155 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:51:51 compute-0 nova_compute[254898]: 2025-11-29 05:51:51.155 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.517s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:51:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1330873549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:51:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:51:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:52 compute-0 ceph-mon[75176]: pgmap v1455: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:53 compute-0 nova_compute[254898]: 2025-11-29 05:51:53.156 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:51:53 compute-0 nova_compute[254898]: 2025-11-29 05:51:53.156 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:51:53 compute-0 nova_compute[254898]: 2025-11-29 05:51:53.157 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:51:53 compute-0 nova_compute[254898]: 2025-11-29 05:51:53.182 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:51:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:54 compute-0 ceph-mon[75176]: pgmap v1456: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:54 compute-0 nova_compute[254898]: 2025-11-29 05:51:54.975 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:51:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:56 compute-0 ceph-mon[75176]: pgmap v1457: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:51:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:58 compute-0 ceph-mon[75176]: pgmap v1458: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:51:59 compute-0 podman[288633]: 2025-11-29 05:51:59.031988328 +0000 UTC m=+0.078513619 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 05:52:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:00 compute-0 ceph-mon[75176]: pgmap v1459: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:02 compute-0 ceph-mon[75176]: pgmap v1460: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:04 compute-0 ceph-mon[75176]: pgmap v1461: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:06 compute-0 ceph-mon[75176]: pgmap v1462: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:07 compute-0 sshd-session[288653]: Invalid user python from 103.147.211.2 port 58674
Nov 29 05:52:07 compute-0 sshd-session[288653]: Received disconnect from 103.147.211.2 port 58674:11: Bye Bye [preauth]
Nov 29 05:52:07 compute-0 sshd-session[288653]: Disconnected from invalid user python 103.147.211.2 port 58674 [preauth]
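[editor's note] The three sshd-session lines above are a pre-auth brute-force probe against a nonexistent 'python' account; a second probe ('casaos' from 45.120.216.232) appears at 05:52:27 below. A hypothetical helper for tallying such probes per source address, assuming the journal has been exported to a plain file first (the filename is an assumption, not part of this system):

    import re
    from collections import Counter

    # Matches sshd's "Invalid user NAME from IP port N" pre-auth records.
    PAT = re.compile(r"Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+) port \d+")

    def count_probes(path="sshd.log"):
        hits = Counter()
        with open(path) as fh:
            for line in fh:
                m = PAT.search(line)
                if m:
                    hits[m.group(2)] += 1
        return hits

    if __name__ == "__main__":
        for ip, n in count_probes().most_common(10):
            print(f"{ip}\t{n}")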
Nov 29 05:52:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:08 compute-0 ceph-mon[75176]: pgmap v1463: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:10 compute-0 ceph-mon[75176]: pgmap v1464: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:52:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:52:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:52:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:52:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:52:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:52:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:12 compute-0 ceph-mon[75176]: pgmap v1465: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:52:13.767 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:52:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:52:13.767 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:52:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:52:13.767 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
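[editor's note] The acquire/release pair above (waited 0.000s, held 0.000s) is oslo.concurrency's named-lock decorator wrapping ProcessMonitor._check_child_processes; nothing is contended. A minimal sketch of the pattern the agent uses (the lock name is the one in the log; the body is elided):

    from oslo_concurrency import lockutils

    # lockutils.synchronized serializes callers on a named in-process lock,
    # emitting the acquire/release DEBUG lines seen above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # body elided; the real method health-checks spawned daemons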
Nov 29 05:52:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:52:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1248572375' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:52:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:52:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1248572375' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:52:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1248572375' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:52:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/1248572375' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
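[editor's note] The audit lines above show client.openstack at 192.168.122.10 polling cluster capacity and the 'volumes' pool quota, consistent with a periodic storage-stats refresh from the OpenStack control plane. A sketch of the equivalent CLI calls via subprocess (assumes a local ceph CLI and keyring; the df key names follow the usual ceph df JSON schema):

    import json
    import subprocess

    def mon_cmd(*args):
        # Run a ceph CLI command and decode its JSON output.
        out = subprocess.run(
            ["ceph", *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    df = mon_cmd("df")                                      # {"prefix":"df"}
    quota = mon_cmd("osd", "pool", "get-quota", "volumes")  # {"prefix":"osd pool get-quota"}
    print(df["stats"], quota)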
Nov 29 05:52:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:15 compute-0 ceph-mon[75176]: pgmap v1466: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:16 compute-0 ceph-mon[75176]: pgmap v1467: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:17 compute-0 podman[288655]: 2025-11-29 05:52:17.005960378 +0000 UTC m=+0.057842341 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:52:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:18 compute-0 ceph-mon[75176]: pgmap v1468: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:20 compute-0 podman[288677]: 2025-11-29 05:52:20.02803629 +0000 UTC m=+0.084364248 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:52:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:20 compute-0 ceph-mon[75176]: pgmap v1469: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:22 compute-0 ceph-mon[75176]: pgmap v1470: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:24 compute-0 ceph-mon[75176]: pgmap v1471: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:26 compute-0 ceph-mon[75176]: pgmap v1472: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:27 compute-0 sshd-session[288703]: Invalid user casaos from 45.120.216.232 port 50424
Nov 29 05:52:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:27 compute-0 sshd-session[288703]: Received disconnect from 45.120.216.232 port 50424:11: Bye Bye [preauth]
Nov 29 05:52:27 compute-0 sshd-session[288703]: Disconnected from invalid user casaos 45.120.216.232 port 50424 [preauth]
Nov 29 05:52:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:28 compute-0 ceph-mon[75176]: pgmap v1473: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:30 compute-0 podman[288705]: 2025-11-29 05:52:30.003502514 +0000 UTC m=+0.056489508 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 05:52:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:30 compute-0 ceph-mon[75176]: pgmap v1474: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:32 compute-0 ceph-mon[75176]: pgmap v1475: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:34 compute-0 ceph-mon[75176]: pgmap v1476: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:36 compute-0 ceph-mon[75176]: pgmap v1477: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:38 compute-0 ceph-mon[75176]: pgmap v1478: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:40 compute-0 ceph-mon[75176]: pgmap v1479: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:52:41
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups', 'vms', '.mgr', '.rgw.root', 'default.rgw.log', 'images', 'cephfs.cephfs.meta']
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
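[editor's note] The balancer block above is one automatic optimize pass: mode upmap, max misplaced 5%, eleven pools scanned, and "prepared 0/10 changes", i.e. the 305 PGs are already balanced and no upmaps were proposed. The module's state can be read back with the real `ceph balancer status` command; a sketch:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    # Field names per the balancer module's status output.
    print(status.get("active"), status.get("mode"), status.get("optimize_result"))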
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:52:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:52:41 compute-0 sudo[288724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:52:41 compute-0 sudo[288724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:41 compute-0 sudo[288724]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:41 compute-0 sudo[288749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:52:41 compute-0 sudo[288749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:41 compute-0 sudo[288749]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:41 compute-0 sudo[288774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:52:41 compute-0 sudo[288774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:41 compute-0 sudo[288774]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:41 compute-0 sudo[288799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:52:41 compute-0 sudo[288799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:52:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:52:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:42 compute-0 sudo[288799]: pam_unix(sudo:session): session closed for user root
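[editor's note] The sudo burst above is cephadm's SSH-orchestration fingerprint: /bin/true to validate passwordless sudo, `which python3` to locate an interpreter, then the staged cephadm binary run with gather-facts (the same prelude repeats before each ceph-volume call below). Re-running that step by hand would look like the following sketch; the path is copied from the log, while the output key names are assumptions about the facts document:

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    # gather-facts prints a JSON host-facts document on stdout.
    facts = json.loads(subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(facts.get("hostname"), facts.get("memory_total_kb"))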
Nov 29 05:52:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:52:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:52:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:52:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:52:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:52:42 compute-0 ceph-mon[75176]: pgmap v1480: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:52:42 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:52:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:52:42 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 396c092f-3c70-4bab-98c4-ca27d561a1ba does not exist
Nov 29 05:52:42 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 36d03458-56d3-4860-b0e7-d5caa5f9b2d1 does not exist
Nov 29 05:52:42 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 6cf257fc-7c2a-4535-b3b1-b87c2d05c62f does not exist
Nov 29 05:52:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:52:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:52:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:52:42 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:52:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:52:42 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:52:42 compute-0 sudo[288857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:52:42 compute-0 sudo[288857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:42 compute-0 sudo[288857]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:42 compute-0 sudo[288882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:52:42 compute-0 sudo[288882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:42 compute-0 sudo[288882]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:42 compute-0 sudo[288907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:52:42 compute-0 sudo[288907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:42 compute-0 sudo[288907]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:42 compute-0 sudo[288932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:52:42 compute-0 sudo[288932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:43 compute-0 podman[288998]: 2025-11-29 05:52:43.175025402 +0000 UTC m=+0.062934207 container create c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 05:52:43 compute-0 systemd[1]: Started libpod-conmon-c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98.scope.
Nov 29 05:52:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:52:43 compute-0 podman[288998]: 2025-11-29 05:52:43.154221814 +0000 UTC m=+0.042130639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:52:43 compute-0 podman[288998]: 2025-11-29 05:52:43.257448744 +0000 UTC m=+0.145357579 container init c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 05:52:43 compute-0 podman[288998]: 2025-11-29 05:52:43.264561298 +0000 UTC m=+0.152470103 container start c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 05:52:43 compute-0 podman[288998]: 2025-11-29 05:52:43.267722095 +0000 UTC m=+0.155630910 container attach c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:52:43 compute-0 jolly_dhawan[289014]: 167 167
Nov 29 05:52:43 compute-0 systemd[1]: libpod-c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98.scope: Deactivated successfully.
Nov 29 05:52:43 compute-0 podman[288998]: 2025-11-29 05:52:43.273062035 +0000 UTC m=+0.160970880 container died c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 05:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8133a71ab7ddbdedb5bf9586d17a26eba490d48fed745902f815b369c4ef9f17-merged.mount: Deactivated successfully.
Nov 29 05:52:43 compute-0 podman[288998]: 2025-11-29 05:52:43.318407923 +0000 UTC m=+0.206316718 container remove c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 05:52:43 compute-0 systemd[1]: libpod-conmon-c6832ea93120fac9f9594cc9b2dfc1639f97e2ba0f59d3f5771a83a76f467f98.scope: Deactivated successfully.
Nov 29 05:52:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:52:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:52:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:52:43 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:52:43 compute-0 podman[289039]: 2025-11-29 05:52:43.548915038 +0000 UTC m=+0.069536377 container create 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:52:43 compute-0 systemd[1]: Started libpod-conmon-9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2.scope.
Nov 29 05:52:43 compute-0 podman[289039]: 2025-11-29 05:52:43.523261983 +0000 UTC m=+0.043883332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:52:43 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:43 compute-0 podman[289039]: 2025-11-29 05:52:43.655412498 +0000 UTC m=+0.176033817 container init 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:52:43 compute-0 podman[289039]: 2025-11-29 05:52:43.662365388 +0000 UTC m=+0.182986687 container start 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 05:52:43 compute-0 podman[289039]: 2025-11-29 05:52:43.666294804 +0000 UTC m=+0.186916103 container attach 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:52:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:44 compute-0 ceph-mon[75176]: pgmap v1481: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:44 compute-0 kind_mirzakhani[289056]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:52:44 compute-0 kind_mirzakhani[289056]: --> relative data size: 1.0
Nov 29 05:52:44 compute-0 kind_mirzakhani[289056]: --> All data devices are unavailable
Nov 29 05:52:44 compute-0 systemd[1]: libpod-9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2.scope: Deactivated successfully.
Nov 29 05:52:44 compute-0 podman[289085]: 2025-11-29 05:52:44.729430303 +0000 UTC m=+0.024189291 container died 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:52:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8e3268e07915e8fe745f72ece57b818dd5fea2876ed2cfbfe161b7c22346e48-merged.mount: Deactivated successfully.
Nov 29 05:52:44 compute-0 podman[289085]: 2025-11-29 05:52:44.769082772 +0000 UTC m=+0.063841740 container remove 9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 05:52:44 compute-0 systemd[1]: libpod-conmon-9a3bbef42193b11f3c696ce3c5555582e7468123732223ab19f28844e4d0c1d2.scope: Deactivated successfully.
Nov 29 05:52:44 compute-0 sudo[288932]: pam_unix(sudo:session): session closed for user root
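[editor's note] The one-shot containers above (jolly_dhawan, then kind_mirzakhani) are cephadm running `ceph-volume lvm batch --no-auto` against the three pre-made LVs; the run ends benignly with "All data devices are unavailable" because each LV already carries an OSD, as the `lvm list` dispatched next confirms via its ceph.osd_id tags. The non-destructive way to preview such a batch call is ceph-volume's --report flag; a sketch (normally executed inside the cephadm shell or container rather than on the bare host):

    import subprocess

    # --report prints the plan without creating OSDs; the device list is
    # copied from the logged command line.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=True,
    )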
Nov 29 05:52:44 compute-0 sudo[289100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:52:44 compute-0 sudo[289100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:44 compute-0 sudo[289100]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:44 compute-0 sudo[289125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:52:44 compute-0 sudo[289125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:44 compute-0 sudo[289125]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:44 compute-0 sudo[289150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:52:44 compute-0 sudo[289150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:44 compute-0 sudo[289150]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:44 compute-0 sudo[289175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:52:44 compute-0 sudo[289175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:45 compute-0 podman[289241]: 2025-11-29 05:52:45.298409941 +0000 UTC m=+0.039163386 container create 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 05:52:45 compute-0 systemd[1]: Started libpod-conmon-046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad.scope.
Nov 29 05:52:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:52:45 compute-0 podman[289241]: 2025-11-29 05:52:45.368848801 +0000 UTC m=+0.109602246 container init 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:52:45 compute-0 podman[289241]: 2025-11-29 05:52:45.375083522 +0000 UTC m=+0.115836947 container start 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:52:45 compute-0 podman[289241]: 2025-11-29 05:52:45.378039065 +0000 UTC m=+0.118792520 container attach 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 05:52:45 compute-0 podman[289241]: 2025-11-29 05:52:45.283082927 +0000 UTC m=+0.023836382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:52:45 compute-0 practical_faraday[289259]: 167 167
Nov 29 05:52:45 compute-0 systemd[1]: libpod-046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad.scope: Deactivated successfully.
Nov 29 05:52:45 compute-0 podman[289241]: 2025-11-29 05:52:45.380381582 +0000 UTC m=+0.121135047 container died 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:52:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a986391aee65a823ce263a8f937ba8f34b5e8c9b7db025ca0d8920c22ac1ee6-merged.mount: Deactivated successfully.
Nov 29 05:52:45 compute-0 podman[289241]: 2025-11-29 05:52:45.413417928 +0000 UTC m=+0.154171353 container remove 046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 05:52:45 compute-0 systemd[1]: libpod-conmon-046facc718bfaff375ab2887ff63788cb82cfa0d72041cf5753eecec76c00fad.scope: Deactivated successfully.
Nov 29 05:52:45 compute-0 podman[289284]: 2025-11-29 05:52:45.579850971 +0000 UTC m=+0.035013136 container create 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:52:45 compute-0 systemd[1]: Started libpod-conmon-9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3.scope.
Nov 29 05:52:45 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:52:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c893624f1c2b108e4c8226e874a94ed08e58679eef8a92c4813cb5f8247d44f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c893624f1c2b108e4c8226e874a94ed08e58679eef8a92c4813cb5f8247d44f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c893624f1c2b108e4c8226e874a94ed08e58679eef8a92c4813cb5f8247d44f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c893624f1c2b108e4c8226e874a94ed08e58679eef8a92c4813cb5f8247d44f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:45 compute-0 podman[289284]: 2025-11-29 05:52:45.660010327 +0000 UTC m=+0.115172522 container init 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:52:45 compute-0 podman[289284]: 2025-11-29 05:52:45.565242434 +0000 UTC m=+0.020404619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:52:45 compute-0 podman[289284]: 2025-11-29 05:52:45.666853854 +0000 UTC m=+0.122016019 container start 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:52:45 compute-0 podman[289284]: 2025-11-29 05:52:45.669737475 +0000 UTC m=+0.124899670 container attach 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 05:52:45 compute-0 nova_compute[254898]: 2025-11-29 05:52:45.948 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:52:45 compute-0 nova_compute[254898]: 2025-11-29 05:52:45.973 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
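[editor's note] The youthful_clarke output that follows is the JSON report from the `ceph-volume lvm list --format json` call dispatched at 05:52:44: a map of OSD id to the logical volume(s) backing it, with the ceph.* LV tags expanded. A sketch for pulling the osd id to LV path mapping out of that report (assumes the JSON was saved to a file; the structure mirrors what is printed below):

    import json

    # Keys are OSD ids ("0", "1", ...); each value is a list of LV records.
    with open("lvm_list.json") as fh:
        report = json.load(fh)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"].get("ceph.osd_fsid"))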
Nov 29 05:52:46 compute-0 youthful_clarke[289300]: {
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:     "0": [
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:         {
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "devices": [
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "/dev/loop3"
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             ],
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_name": "ceph_lv0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_size": "21470642176",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "name": "ceph_lv0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "tags": {
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.cluster_name": "ceph",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.crush_device_class": "",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.encrypted": "0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.osd_id": "0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.type": "block",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.vdo": "0"
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             },
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "type": "block",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "vg_name": "ceph_vg0"
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:         }
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:     ],
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:     "1": [
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:         {
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "devices": [
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "/dev/loop4"
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             ],
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_name": "ceph_lv1",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_size": "21470642176",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "name": "ceph_lv1",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "tags": {
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.cluster_name": "ceph",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.crush_device_class": "",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.encrypted": "0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.osd_id": "1",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.type": "block",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.vdo": "0"
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             },
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "type": "block",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "vg_name": "ceph_vg1"
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:         }
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:     ],
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:     "2": [
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:         {
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "devices": [
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "/dev/loop5"
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             ],
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_name": "ceph_lv2",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_size": "21470642176",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "name": "ceph_lv2",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "tags": {
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.cluster_name": "ceph",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.crush_device_class": "",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.encrypted": "0",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.osd_id": "2",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.type": "block",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:                 "ceph.vdo": "0"
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             },
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "type": "block",
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:             "vg_name": "ceph_vg2"
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:         }
Nov 29 05:52:46 compute-0 youthful_clarke[289300]:     ]
Nov 29 05:52:46 compute-0 youthful_clarke[289300]: }
Nov 29 05:52:46 compute-0 systemd[1]: libpod-9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3.scope: Deactivated successfully.
Nov 29 05:52:46 compute-0 podman[289284]: 2025-11-29 05:52:46.382874681 +0000 UTC m=+0.838036846 container died 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 05:52:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c893624f1c2b108e4c8226e874a94ed08e58679eef8a92c4813cb5f8247d44f-merged.mount: Deactivated successfully.
Nov 29 05:52:46 compute-0 podman[289284]: 2025-11-29 05:52:46.429291464 +0000 UTC m=+0.884453629 container remove 9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:52:46 compute-0 systemd[1]: libpod-conmon-9d90894ce01cc9a98da82b103acb4a12adb7abf493511d6d2e1dee0ea2a520d3.scope: Deactivated successfully.
Nov 29 05:52:46 compute-0 ceph-mon[75176]: pgmap v1482: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:46 compute-0 sudo[289175]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:46 compute-0 sudo[289319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:52:46 compute-0 sudo[289319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:46 compute-0 sudo[289319]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:46 compute-0 sudo[289344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:52:46 compute-0 sudo[289344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:46 compute-0 sudo[289344]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:46 compute-0 sudo[289369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:52:46 compute-0 sudo[289369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:46 compute-0 sudo[289369]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:46 compute-0 sudo[289394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:52:46 compute-0 sudo[289394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:46 compute-0 nova_compute[254898]: 2025-11-29 05:52:46.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:52:47 compute-0 podman[289459]: 2025-11-29 05:52:47.067375739 +0000 UTC m=+0.039151337 container create 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:52:47 compute-0 systemd[1]: Started libpod-conmon-50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80.scope.
Nov 29 05:52:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:52:47 compute-0 podman[289459]: 2025-11-29 05:52:47.137780067 +0000 UTC m=+0.109555685 container init 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 05:52:47 compute-0 podman[289459]: 2025-11-29 05:52:47.048746384 +0000 UTC m=+0.020522002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:52:47 compute-0 podman[289459]: 2025-11-29 05:52:47.146442199 +0000 UTC m=+0.118217797 container start 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 05:52:47 compute-0 podman[289459]: 2025-11-29 05:52:47.149856102 +0000 UTC m=+0.121631700 container attach 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:52:47 compute-0 youthful_albattani[289476]: 167 167
Nov 29 05:52:47 compute-0 systemd[1]: libpod-50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80.scope: Deactivated successfully.
Nov 29 05:52:47 compute-0 podman[289459]: 2025-11-29 05:52:47.15342623 +0000 UTC m=+0.125201828 container died 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:52:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe975b8de2252f003cc86233d436b9fffdb885ed52a9b85988e4b8089f3bfbb5-merged.mount: Deactivated successfully.
Nov 29 05:52:47 compute-0 podman[289459]: 2025-11-29 05:52:47.192714179 +0000 UTC m=+0.164489777 container remove 50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:52:47 compute-0 systemd[1]: libpod-conmon-50e9cbbc0fd99f58046d99a3b3f0d4fbf695da9775f2d09d543e21671ca72b80.scope: Deactivated successfully.
Nov 29 05:52:47 compute-0 podman[289473]: 2025-11-29 05:52:47.21903239 +0000 UTC m=+0.103824925 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 05:52:47 compute-0 podman[289517]: 2025-11-29 05:52:47.361863117 +0000 UTC m=+0.039166927 container create 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:52:47 compute-0 systemd[1]: Started libpod-conmon-045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c.scope.
Nov 29 05:52:47 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490447a9d740c6dba423968f5b14b3deca570a81a39a1a4976aceb812971d91b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490447a9d740c6dba423968f5b14b3deca570a81a39a1a4976aceb812971d91b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490447a9d740c6dba423968f5b14b3deca570a81a39a1a4976aceb812971d91b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490447a9d740c6dba423968f5b14b3deca570a81a39a1a4976aceb812971d91b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:52:47 compute-0 podman[289517]: 2025-11-29 05:52:47.343181811 +0000 UTC m=+0.020485641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:52:47 compute-0 podman[289517]: 2025-11-29 05:52:47.440527597 +0000 UTC m=+0.117831427 container init 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:52:47 compute-0 podman[289517]: 2025-11-29 05:52:47.450316206 +0000 UTC m=+0.127620016 container start 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 05:52:47 compute-0 podman[289517]: 2025-11-29 05:52:47.453511714 +0000 UTC m=+0.130815524 container attach 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 05:52:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:48 compute-0 confident_leakey[289533]: {
Nov 29 05:52:48 compute-0 confident_leakey[289533]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "osd_id": 0,
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "type": "bluestore"
Nov 29 05:52:48 compute-0 confident_leakey[289533]:     },
Nov 29 05:52:48 compute-0 confident_leakey[289533]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "osd_id": 1,
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "type": "bluestore"
Nov 29 05:52:48 compute-0 confident_leakey[289533]:     },
Nov 29 05:52:48 compute-0 confident_leakey[289533]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "osd_id": 2,
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:52:48 compute-0 confident_leakey[289533]:         "type": "bluestore"
Nov 29 05:52:48 compute-0 confident_leakey[289533]:     }
Nov 29 05:52:48 compute-0 confident_leakey[289533]: }
Nov 29 05:52:48 compute-0 systemd[1]: libpod-045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c.scope: Deactivated successfully.
Nov 29 05:52:48 compute-0 conmon[289533]: conmon 045c93423505625f7222 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c.scope/container/memory.events
Nov 29 05:52:48 compute-0 podman[289517]: 2025-11-29 05:52:48.325897618 +0000 UTC m=+1.003201428 container died 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:52:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-490447a9d740c6dba423968f5b14b3deca570a81a39a1a4976aceb812971d91b-merged.mount: Deactivated successfully.
Nov 29 05:52:48 compute-0 podman[289517]: 2025-11-29 05:52:48.375941719 +0000 UTC m=+1.053245529 container remove 045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:52:48 compute-0 systemd[1]: libpod-conmon-045c93423505625f722244e127d164339a6d52457f302526ec81135ce2ea9c1c.scope: Deactivated successfully.
Nov 29 05:52:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:48 compute-0 sudo[289394]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:52:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:52:48 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:52:48 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:52:48 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 98281bd0-4b5f-448d-840f-09ff284418c4 does not exist
Nov 29 05:52:48 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev a3df4935-a65e-4906-b0c7-76080c150636 does not exist
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.446471) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568446508, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1074, "num_deletes": 251, "total_data_size": 1589011, "memory_usage": 1607464, "flush_reason": "Manual Compaction"}
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568457117, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1552202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32261, "largest_seqno": 33334, "table_properties": {"data_size": 1546944, "index_size": 2718, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11139, "raw_average_key_size": 19, "raw_value_size": 1536504, "raw_average_value_size": 2714, "num_data_blocks": 123, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764395468, "oldest_key_time": 1764395468, "file_creation_time": 1764395568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 10673 microseconds, and 3634 cpu microseconds.
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.457148) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1552202 bytes OK
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.457382) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.458642) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.458656) EVENT_LOG_v1 {"time_micros": 1764395568458651, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.458673) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1584002, prev total WAL file size 1586999, number of live WAL files 2.
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.459395) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1515KB)], [68(8719KB)]
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568459424, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10481360, "oldest_snapshot_seqno": -1}
Nov 29 05:52:48 compute-0 ceph-mon[75176]: pgmap v1483: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:52:48 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:52:48 compute-0 sudo[289578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:52:48 compute-0 sudo[289578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:48 compute-0 sudo[289578]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6278 keys, 8760606 bytes, temperature: kUnknown
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568504422, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8760606, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8719509, "index_size": 24283, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 158296, "raw_average_key_size": 25, "raw_value_size": 8607893, "raw_average_value_size": 1371, "num_data_blocks": 988, "num_entries": 6278, "num_filter_entries": 6278, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764395568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.504727) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8760606 bytes
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.505776) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 232.0 rd, 193.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.5 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(12.4) write-amplify(5.6) OK, records in: 6792, records dropped: 514 output_compression: NoCompression
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.505791) EVENT_LOG_v1 {"time_micros": 1764395568505783, "job": 38, "event": "compaction_finished", "compaction_time_micros": 45175, "compaction_time_cpu_micros": 20423, "output_level": 6, "num_output_files": 1, "total_output_size": 8760606, "num_input_records": 6792, "num_output_records": 6278, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568506077, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395568507499, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.459343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.507688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.507695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.507697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.507699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:52:48 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:52:48.507701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:52:48 compute-0 sudo[289603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:52:48 compute-0 sudo[289603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:52:48 compute-0 sudo[289603]: pam_unix(sudo:session): session closed for user root
Nov 29 05:52:48 compute-0 nova_compute[254898]: 2025-11-29 05:52:48.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:52:49 compute-0 nova_compute[254898]: 2025-11-29 05:52:49.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:52:49 compute-0 nova_compute[254898]: 2025-11-29 05:52:49.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:52:49 compute-0 nova_compute[254898]: 2025-11-29 05:52:49.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:52:49 compute-0 nova_compute[254898]: 2025-11-29 05:52:49.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:52:49 compute-0 nova_compute[254898]: 2025-11-29 05:52:49.971 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:52:49 compute-0 nova_compute[254898]: 2025-11-29 05:52:49.971 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:52:49 compute-0 nova_compute[254898]: 2025-11-29 05:52:49.972 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:52:49 compute-0 nova_compute[254898]: 2025-11-29 05:52:49.972 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:52:49 compute-0 nova_compute[254898]: 2025-11-29 05:52:49.972 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:52:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:50 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:52:50 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4273346653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:52:50 compute-0 nova_compute[254898]: 2025-11-29 05:52:50.423 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:52:50 compute-0 ceph-mon[75176]: pgmap v1484: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:50 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/4273346653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:52:50 compute-0 sshd-session[289628]: Invalid user testftp from 45.249.245.22 port 40908
Nov 29 05:52:50 compute-0 nova_compute[254898]: 2025-11-29 05:52:50.585 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:52:50 compute-0 nova_compute[254898]: 2025-11-29 05:52:50.586 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4939MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:52:50 compute-0 nova_compute[254898]: 2025-11-29 05:52:50.587 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:52:50 compute-0 nova_compute[254898]: 2025-11-29 05:52:50.587 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:52:50 compute-0 podman[289652]: 2025-11-29 05:52:50.602967047 +0000 UTC m=+0.088541612 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 05:52:50 compute-0 nova_compute[254898]: 2025-11-29 05:52:50.642 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:52:50 compute-0 nova_compute[254898]: 2025-11-29 05:52:50.642 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:52:50 compute-0 nova_compute[254898]: 2025-11-29 05:52:50.669 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:52:50 compute-0 sshd-session[289628]: Received disconnect from 45.249.245.22 port 40908:11: Bye Bye [preauth]
Nov 29 05:52:50 compute-0 sshd-session[289628]: Disconnected from invalid user testftp 45.249.245.22 port 40908 [preauth]
Nov 29 05:52:51 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:52:51 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3205061130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:52:51 compute-0 nova_compute[254898]: 2025-11-29 05:52:51.064 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:52:51 compute-0 nova_compute[254898]: 2025-11-29 05:52:51.069 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:52:51 compute-0 nova_compute[254898]: 2025-11-29 05:52:51.091 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:52:51 compute-0 nova_compute[254898]: 2025-11-29 05:52:51.093 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:52:51 compute-0 nova_compute[254898]: 2025-11-29 05:52:51.093 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:52:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
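
The pg_autoscaler lines above all fit one shape: pg target = used_fraction * bias * total_target_pgs, then quantized (rounded to a power of two and left at the current pg_num unless the change is large enough, hence "quantized to 32 (current 32)"). The logged values reproduce exactly with total_target_pgs = 300, consistent with 3 OSDs at the default mon_target_pg_per_osd = 100; that constant is inferred from the logged ratios, not read from this cluster:

    # Reproducing the pg_autoscaler arithmetic from the lines above.
    # Assumption: 3 OSDs x mon_target_pg_per_osd (default 100) = 300.
    POOL_TARGET_PGS = 300

    def pg_target(used_fraction: float, bias: float) -> float:
        return used_fraction * bias * POOL_TARGET_PGS

    print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557  ('.mgr')
    print(pg_target(0.0005435097797421371, 4.0))   # ~0.6522117  ('cephfs.cephfs.meta')
    print(pg_target(0.000665858301588852, 1.0))    # ~0.1997575  ('images')
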
Nov 29 05:52:51 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3205061130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:52:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:52 compute-0 ceph-mon[75176]: pgmap v1485: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
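
The recurring _set_new_cache_sizes line is the monitor's cache autotuner dividing its memory budget (driven by mon_memory_target) among incremental-osdmap, full-osdmap, and RocksDB key/value caches. The split behind the logged byte counts, as plain arithmetic:

    # Bytes copied from the _set_new_cache_sizes line above.
    cache_size, inc, full, kv = 1020054731, 348127232, 348127232, 318767104
    for name, v in [("inc_alloc", inc), ("full_alloc", full), ("kv_alloc", kv)]:
        print(name, f"{v / cache_size:.1%}")   # ~34.1% / ~34.1% / ~31.3%
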
Nov 29 05:52:53 compute-0 nova_compute[254898]: 2025-11-29 05:52:53.094 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:52:53 compute-0 nova_compute[254898]: 2025-11-29 05:52:53.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:52:53 compute-0 nova_compute[254898]: 2025-11-29 05:52:53.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:52:53 compute-0 nova_compute[254898]: 2025-11-29 05:52:53.954 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:52:53 compute-0 nova_compute[254898]: 2025-11-29 05:52:53.966 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:52:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:54 compute-0 ceph-mon[75176]: pgmap v1486: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:54 compute-0 nova_compute[254898]: 2025-11-29 05:52:54.961 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:52:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:56 compute-0 ceph-mon[75176]: pgmap v1487: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:52:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:52:58 compute-0 ceph-mon[75176]: pgmap v1488: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:00 compute-0 ceph-mon[75176]: pgmap v1489: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:00 compute-0 podman[289701]: 2025-11-29 05:53:00.98919349 +0000 UTC m=+0.046647619 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 05:53:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:02 compute-0 ceph-mon[75176]: pgmap v1490: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:03 compute-0 sshd-session[289721]: Received disconnect from 192.161.60.110 port 59514:11: Bye Bye [preauth]
Nov 29 05:53:03 compute-0 sshd-session[289721]: Disconnected from authenticating user root 192.161.60.110 port 59514 [preauth]
Nov 29 05:53:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:04 compute-0 ceph-mon[75176]: pgmap v1491: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:06 compute-0 ceph-mon[75176]: pgmap v1492: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:08 compute-0 ceph-mon[75176]: pgmap v1493: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:10 compute-0 ceph-mon[75176]: pgmap v1494: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:53:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:53:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:53:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:53:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:53:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:53:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:12 compute-0 ceph-mon[75176]: pgmap v1495: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:13 compute-0 sshd-session[289723]: Invalid user sopuser from 154.221.27.234 port 33822
Nov 29 05:53:13 compute-0 sshd-session[289723]: Received disconnect from 154.221.27.234 port 33822:11: Bye Bye [preauth]
Nov 29 05:53:13 compute-0 sshd-session[289723]: Disconnected from invalid user sopuser 154.221.27.234 port 33822 [preauth]
Nov 29 05:53:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:53:13.768 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:53:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:53:13.769 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:53:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:53:13.769 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
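
The acquiring/acquired/released triplet above is oslo.concurrency's standard lock logging: the metadata agent serializes its child-process health check behind a named lock. A minimal sketch of the same pattern, assuming oslo.concurrency is installed (the function body is a placeholder):

    from oslo_concurrency import lockutils

    # Same pattern the agent logs above: a named lock around the
    # periodic check of spawned haproxy children.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # ... inspect child processes, respawn any that died ...
        pass

    _check_child_processes()
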
Nov 29 05:53:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:53:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/198869822' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:53:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:53:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/198869822' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:53:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/198869822' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:53:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/198869822' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
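
This df plus "osd pool get-quota" pair against the volumes pool is the periodic capacity poll of a Ceph-backed storage client, the same kind of subprocess call nova logged at 05:52:50.669. A sketch issuing both with the client.openstack identity from these lines, assuming a local ceph CLI and keyring (the helper name is illustrative):

    import json
    import subprocess

    def ceph_json(*args: str) -> dict:
        # Mirrors the logged invocations: authenticate as client.openstack
        # and ask the mon for JSON output.
        out = subprocess.check_output(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             "--format", "json", *args])
        return json.loads(out)

    df = ceph_json("df")                                      # cluster-wide usage
    quota = ceph_json("osd", "pool", "get-quota", "volumes")  # per-pool quota
    print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))
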
Nov 29 05:53:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:15 compute-0 ceph-mon[75176]: pgmap v1496: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:16 compute-0 ceph-mon[75176]: pgmap v1497: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:18 compute-0 podman[289726]: 2025-11-29 05:53:18.003996973 +0000 UTC m=+0.056897250 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 05:53:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:18 compute-0 ceph-mon[75176]: pgmap v1498: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:19 compute-0 sshd-session[289747]: Invalid user dolphinscheduler from 152.32.145.111 port 60508
Nov 29 05:53:19 compute-0 sshd-session[289747]: Received disconnect from 152.32.145.111 port 60508:11: Bye Bye [preauth]
Nov 29 05:53:19 compute-0 sshd-session[289747]: Disconnected from invalid user dolphinscheduler 152.32.145.111 port 60508 [preauth]
Nov 29 05:53:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:20 compute-0 ceph-mon[75176]: pgmap v1499: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:21 compute-0 podman[289749]: 2025-11-29 05:53:21.016951034 +0000 UTC m=+0.073127855 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 05:53:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:22 compute-0 ceph-mon[75176]: pgmap v1500: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:24 compute-0 ceph-mon[75176]: pgmap v1501: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:26 compute-0 ceph-mon[75176]: pgmap v1502: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:27 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:28 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:29 compute-0 ceph-mon[75176]: pgmap v1503: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:30 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:30 compute-0 ceph-mon[75176]: pgmap v1504: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:31 compute-0 podman[289776]: 2025-11-29 05:53:31.984903964 +0000 UTC m=+0.045383389 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 05:53:32 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:32 compute-0 ceph-mon[75176]: pgmap v1505: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:32 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:34 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:34 compute-0 ceph-mon[75176]: pgmap v1506: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:36 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:36 compute-0 ceph-mon[75176]: pgmap v1507: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:37 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:38 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:38 compute-0 ceph-mon[75176]: pgmap v1508: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:38 compute-0 sshd[190545]: Timeout before authentication for connection from 45.78.219.216 to 38.102.83.17, pid = 288231
Nov 29 05:53:40 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:40 compute-0 sshd-session[289795]: Invalid user exx from 45.120.216.232 port 49314
Nov 29 05:53:40 compute-0 ceph-mon[75176]: pgmap v1509: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:40 compute-0 sshd-session[289795]: Received disconnect from 45.120.216.232 port 49314:11: Bye Bye [preauth]
Nov 29 05:53:40 compute-0 sshd-session[289795]: Disconnected from invalid user exx 45.120.216.232 port 49314 [preauth]
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Optimize plan auto_2025-11-29_05:53:41
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [balancer INFO root] do_upmap
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'volumes']
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [balancer INFO root] prepared 0/10 changes
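
"prepared 0/10 changes" means the upmap optimizer evaluated its budget of 10 candidate remappings and found nothing worth moving while keeping misplaced objects under the 0.05 cap logged above. Querying the state behind that run (standard mgr command; passing --format json is assumed to behave as it does for other ceph commands):

    import json
    import subprocess

    # Inspect the balancer that produced the "prepared 0/10 changes" line.
    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status["mode"], status["active"])   # expected here: "upmap", True
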
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:53:41 compute-0 ceph-mgr[75473]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 05:53:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:53:42 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:53:42 compute-0 ceph-mgr[75473]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1460327761
Nov 29 05:53:42 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:42 compute-0 ceph-mon[75176]: pgmap v1510: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:42 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:44 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:44 compute-0 ceph-mon[75176]: pgmap v1511: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:46 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:46 compute-0 ceph-mon[75176]: pgmap v1512: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:47 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:47 compute-0 nova_compute[254898]: 2025-11-29 05:53:47.952 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:47 compute-0 nova_compute[254898]: 2025-11-29 05:53:47.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:47 compute-0 nova_compute[254898]: 2025-11-29 05:53:47.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:48 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:48 compute-0 ceph-mon[75176]: pgmap v1513: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:48 compute-0 sudo[289798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:53:48 compute-0 sudo[289798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:48 compute-0 sudo[289798]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:48 compute-0 sudo[289829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:53:48 compute-0 sudo[289829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:48 compute-0 sudo[289829]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:48 compute-0 podman[289822]: 2025-11-29 05:53:48.735561871 +0000 UTC m=+0.083686113 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:53:48 compute-0 sudo[289868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:53:48 compute-0 sudo[289868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:48 compute-0 sudo[289868]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:48 compute-0 sudo[289893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 29 05:53:48 compute-0 sudo[289893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:49 compute-0 sudo[289893]: pam_unix(sudo:session): session closed for user root
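
The sudo sequence above (/bin/true, then which python3, then the copied cephadm binary with --timeout 895 gather-facts) is the cephadm mgr module's usual SSH probe-then-execute pattern for refreshing host inventory; gather-facts emits the host's facts as JSON. Running the same subcommand by hand, assuming cephadm is installed on the host (the JSON key names are recalled from cephadm's output and should be treated as assumptions):

    import json
    import subprocess

    # Same call the mgr issues above, run directly (requires root).
    facts = json.loads(subprocess.check_output(["cephadm", "gather-facts"]))
    print(facts["hostname"], facts["memory_total_kb"])
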
Nov 29 05:53:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:53:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:53:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 05:53:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:53:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 05:53:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:53:49 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 473a5d92-cb91-4786-a7c6-8aa12943fe03 does not exist
Nov 29 05:53:49 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev db3bc9fb-71c3-4651-b50b-de80c26aaa08 does not exist
Nov 29 05:53:49 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 82bc9350-2034-4c04-a781-8fd76e6cf7d4 does not exist
Nov 29 05:53:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 05:53:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:53:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 05:53:49 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:53:49 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:53:49 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:53:49 compute-0 sudo[289948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:53:49 compute-0 sudo[289948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:49 compute-0 sudo[289948]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:49 compute-0 sudo[289973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:53:49 compute-0 sudo[289973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:49 compute-0 sudo[289973]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:49 compute-0 sudo[289998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:53:49 compute-0 sudo[289998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:49 compute-0 sudo[289998]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:53:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 05:53:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:53:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 05:53:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 05:53:49 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:53:49 compute-0 sudo[290023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 29 05:53:49 compute-0 sudo[290023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:49 compute-0 podman[290089]: 2025-11-29 05:53:49.930319243 +0000 UTC m=+0.059938344 container create 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:53:49 compute-0 nova_compute[254898]: 2025-11-29 05:53:49.967 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:49 compute-0 systemd[1]: Started libpod-conmon-00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7.scope.
Nov 29 05:53:49 compute-0 podman[290089]: 2025-11-29 05:53:49.897362339 +0000 UTC m=+0.026981450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:53:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:53:50 compute-0 podman[290089]: 2025-11-29 05:53:50.039804666 +0000 UTC m=+0.169423767 container init 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 05:53:50 compute-0 podman[290089]: 2025-11-29 05:53:50.050585969 +0000 UTC m=+0.180205040 container start 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:53:50 compute-0 podman[290089]: 2025-11-29 05:53:50.054424412 +0000 UTC m=+0.184043493 container attach 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 05:53:50 compute-0 thirsty_borg[290105]: 167 167
Nov 29 05:53:50 compute-0 systemd[1]: libpod-00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7.scope: Deactivated successfully.
Nov 29 05:53:50 compute-0 podman[290089]: 2025-11-29 05:53:50.060159552 +0000 UTC m=+0.189778623 container died 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 05:53:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-962c672f8cd1d8baf29e6e3ad347039333ca052dff3aead58233bbad138a298b-merged.mount: Deactivated successfully.
Nov 29 05:53:50 compute-0 podman[290089]: 2025-11-29 05:53:50.103866279 +0000 UTC m=+0.233485350 container remove 00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 05:53:50 compute-0 systemd[1]: libpod-conmon-00f11a105bcbfa6770faaf0602b2cd0a1a6b4e2de933d10147c101229305fef7.scope: Deactivated successfully.
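
A create/init/start/attach/died/remove burst completing in roughly 200 ms, as above, is a one-shot "podman run --rm" issued by cephadm; the container's only output was "167 167", which matches cephadm probing the ceph uid/gid inside the image (it stats /var/lib/ceph in the container; that detail is an assumption about cephadm internals). A sketch reproducing the probe:

    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

    # One-shot container, removed on exit, printing the owner uid/gid
    # of /var/lib/ceph inside the image.
    out = subprocess.check_output(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())   # expected: "167 167"
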
Nov 29 05:53:50 compute-0 podman[290130]: 2025-11-29 05:53:50.297519186 +0000 UTC m=+0.040035229 container create 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 05:53:50 compute-0 systemd[1]: Started libpod-conmon-5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295.scope.
Nov 29 05:53:50 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:50 compute-0 podman[290130]: 2025-11-29 05:53:50.282708554 +0000 UTC m=+0.025224617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:53:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
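
The xfs "timestamps until 2038" messages are informational: these filesystems were formatted without the bigtime feature, so inode timestamps saturate at the 32-bit epoch limit. Filesystems made with "mkfs.xfs -m bigtime=1" (or upgraded offline with xfs_admin on recent xfsprogs) do not log this. A quick check, with the mount point assumed to be the root xfs filesystem:

    import subprocess

    # xfs_info prints the filesystem geometry, including "bigtime=0|1"
    # on xfsprogs versions that know the feature.
    info = subprocess.check_output(["xfs_info", "/"]).decode()
    print("bigtime=1" in info)   # False on the filesystems warned about above
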
Nov 29 05:53:50 compute-0 podman[290130]: 2025-11-29 05:53:50.386821316 +0000 UTC m=+0.129337369 container init 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:53:50 compute-0 podman[290130]: 2025-11-29 05:53:50.398416599 +0000 UTC m=+0.140932632 container start 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:53:50 compute-0 podman[290130]: 2025-11-29 05:53:50.40176039 +0000 UTC m=+0.144276433 container attach 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 05:53:50 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:50 compute-0 ceph-mon[75176]: pgmap v1514: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:50 compute-0 nova_compute[254898]: 2025-11-29 05:53:50.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:50 compute-0 nova_compute[254898]: 2025-11-29 05:53:50.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 05:53:51 compute-0 confident_kowalevski[290147]: --> passed data devices: 0 physical, 3 LVM
Nov 29 05:53:51 compute-0 confident_kowalevski[290147]: --> relative data size: 1.0
Nov 29 05:53:51 compute-0 confident_kowalevski[290147]: --> All data devices are unavailable
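
"passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" means the three logical volumes named on the cephadm command line at 05:53:49 are already consumed by existing OSDs, so this batch run exits without creating anything; the earlier "osd tree --states destroyed" and bootstrap-osd key fetches were cephadm staging for exactly this call. The same decision can be previewed non-destructively with --report, which prints what batch would do and stops:

    import subprocess

    # Same invocation as the logged cephadm ceph-volume call, minus
    # --yes/--no-systemd and plus --report (dry run, no OSDs created).
    subprocess.run([
        "ceph-volume", "lvm", "batch", "--no-auto", "--report",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
    ], check=False)
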
Nov 29 05:53:51 compute-0 systemd[1]: libpod-5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295.scope: Deactivated successfully.
Nov 29 05:53:51 compute-0 podman[290130]: 2025-11-29 05:53:51.480801418 +0000 UTC m=+1.223317491 container died 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:53:51 compute-0 systemd[1]: libpod-5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295.scope: Consumed 1.038s CPU time.
Nov 29 05:53:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b994ee34d94a9c23eb4cd716d66ebe05bf446de5b57ad1b7b1c6d658c55c1e7-merged.mount: Deactivated successfully.
Nov 29 05:53:51 compute-0 podman[290130]: 2025-11-29 05:53:51.547073035 +0000 UTC m=+1.289589088 container remove 5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:53:51 compute-0 systemd[1]: libpod-conmon-5757457ac5eef04f2d14c6afa5fe0763388e6a5a5b506e845cbe69b57c015295.scope: Deactivated successfully.
Nov 29 05:53:51 compute-0 sudo[290023]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:51 compute-0 sudo[290207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:53:51 compute-0 podman[290177]: 2025-11-29 05:53:51.667491794 +0000 UTC m=+0.149534510 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 05:53:51 compute-0 sudo[290207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:51 compute-0 sudo[290207]: pam_unix(sudo:session): session closed for user root
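The health_status=healthy event above is driven by the healthcheck entry in that config_data blob: a host directory is mounted read-only at /openstack and its script runs as the container health command (podman's --health-cmd). A tiny illustration over a trimmed copy of the dict, assuming that mapping:

    # Trimmed from the ovn_controller config_data logged above.
    config_data = {
        "healthcheck": {
            "mount": "/var/lib/openstack/healthchecks/ovn_controller",
            "test": "/openstack/healthcheck",
        },
    }

    hc = config_data["healthcheck"]
    # The mount provides the script inside the container; the test is what
    # podman periodically executes to produce health_status events.
    print(f"--health-cmd '{hc['test']}' (script from host dir {hc['mount']})")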
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:53:51 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
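The pg target numbers in that sweep follow the autoscaler's capacity formula, pg_target = capacity_ratio * bias * root_pg_budget. A quick check, assuming the default mon_target_pg_per_osd of 100 and this cluster's 3 OSDs (a 300-PG budget); the target is then quantized, and pg_num stays at its current value unless the target is off by roughly a factor of three, which is why nothing changes here:

    # Reproduce three of the pg_autoscaler lines above.
    ROOT_PG_BUDGET = 100 * 3  # mon_target_pg_per_osd * OSD count (assumed)

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.000665858301588852,  1.0),
        "cephfs.cephfs.meta": (0.0005435097797421371, 4.0),
    }

    for name, (capacity_ratio, bias) in pools.items():
        pg_target = capacity_ratio * bias * ROOT_PG_BUDGET
        print(f"{name}: pg target {pg_target}")
    # .mgr: 0.0021557249...  images: 0.1997574904...  cephfs.cephfs.meta:
    # 0.6522117356...  -- all matching the logged values before quantization.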
Nov 29 05:53:51 compute-0 sudo[290239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:53:51 compute-0 sudo[290239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:51 compute-0 sudo[290239]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:51 compute-0 sudo[290264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:53:51 compute-0 sudo[290264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:51 compute-0 sudo[290264]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:51 compute-0 sudo[290289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- lvm list --format json
Nov 29 05:53:51 compute-0 sudo[290289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:51 compute-0 nova_compute[254898]: 2025-11-29 05:53:51.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:51 compute-0 nova_compute[254898]: 2025-11-29 05:53:51.954 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:51 compute-0 nova_compute[254898]: 2025-11-29 05:53:51.955 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:51 compute-0 nova_compute[254898]: 2025-11-29 05:53:51.987 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:53:51 compute-0 nova_compute[254898]: 2025-11-29 05:53:51.988 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:53:51 compute-0 nova_compute[254898]: 2025-11-29 05:53:51.988 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:53:51 compute-0 nova_compute[254898]: 2025-11-29 05:53:51.989 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 05:53:51 compute-0 nova_compute[254898]: 2025-11-29 05:53:51.989 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:53:52 compute-0 podman[290374]: 2025-11-29 05:53:52.230366263 +0000 UTC m=+0.059152034 container create b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 05:53:52 compute-0 systemd[1]: Started libpod-conmon-b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87.scope.
Nov 29 05:53:52 compute-0 podman[290374]: 2025-11-29 05:53:52.203366185 +0000 UTC m=+0.032152016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:53:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:53:52 compute-0 podman[290374]: 2025-11-29 05:53:52.31092134 +0000 UTC m=+0.139707131 container init b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 05:53:52 compute-0 podman[290374]: 2025-11-29 05:53:52.323961148 +0000 UTC m=+0.152746899 container start b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:53:52 compute-0 podman[290374]: 2025-11-29 05:53:52.328000017 +0000 UTC m=+0.156785818 container attach b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 05:53:52 compute-0 vigorous_maxwell[290390]: 167 167
Nov 29 05:53:52 compute-0 systemd[1]: libpod-b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87.scope: Deactivated successfully.
Nov 29 05:53:52 compute-0 podman[290374]: 2025-11-29 05:53:52.334511206 +0000 UTC m=+0.163296997 container died b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:53:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a191d008ef9464b834bec9b4da66ad38c21b6bfca9f7b1508d270484a707bde9-merged.mount: Deactivated successfully.
Nov 29 05:53:52 compute-0 podman[290374]: 2025-11-29 05:53:52.376729656 +0000 UTC m=+0.205515447 container remove b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:53:52 compute-0 systemd[1]: libpod-conmon-b605d88441dd4352108cd644f1b2abd0af599805ab29eabc15996f88bff27d87.scope: Deactivated successfully.
Nov 29 05:53:52 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:53:52 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272420030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:53:52 compute-0 ceph-mon[75176]: pgmap v1515: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:52 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1272420030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:53:52 compute-0 nova_compute[254898]: 2025-11-29 05:53:52.487 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
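That ceph df round trip (dispatched by the mon a few lines up, back in about 0.5 s) is how the resource tracker sizes RBD-backed storage. A stand-alone equivalent, assuming the standard ceph df JSON schema with a top-level "stats" object of byte counters:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]

    # total_bytes / total_avail_bytes are cluster-wide byte counters; the
    # "60 GiB / 60 GiB avail" in the pgmap lines above should show up here.
    print(f"avail: {stats['total_avail_bytes'] / 1024**3:.1f} GiB "
          f"of {stats['total_bytes'] / 1024**3:.1f} GiB")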
Nov 29 05:53:52 compute-0 podman[290415]: 2025-11-29 05:53:52.573304774 +0000 UTC m=+0.059209816 container create 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 05:53:52 compute-0 podman[290415]: 2025-11-29 05:53:52.549349409 +0000 UTC m=+0.035254461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:53:52 compute-0 systemd[1]: Started libpod-conmon-248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262.scope.
Nov 29 05:53:52 compute-0 nova_compute[254898]: 2025-11-29 05:53:52.693 254902 WARNING nova.virt.libvirt.driver [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 05:53:52 compute-0 nova_compute[254898]: 2025-11-29 05:53:52.698 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4939MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 05:53:52 compute-0 nova_compute[254898]: 2025-11-29 05:53:52.698 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:53:52 compute-0 nova_compute[254898]: 2025-11-29 05:53:52.699 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:53:52 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a26e906f3d1ebdd076265aca34305673f06ffd9afc82b9a4ad5999b0b58400b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a26e906f3d1ebdd076265aca34305673f06ffd9afc82b9a4ad5999b0b58400b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a26e906f3d1ebdd076265aca34305673f06ffd9afc82b9a4ad5999b0b58400b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a26e906f3d1ebdd076265aca34305673f06ffd9afc82b9a4ad5999b0b58400b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:52 compute-0 podman[290415]: 2025-11-29 05:53:52.753633566 +0000 UTC m=+0.239538668 container init 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:53:52 compute-0 podman[290415]: 2025-11-29 05:53:52.761944439 +0000 UTC m=+0.247849501 container start 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Nov 29 05:53:52 compute-0 podman[290415]: 2025-11-29 05:53:52.765575418 +0000 UTC m=+0.251480490 container attach 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 05:53:52 compute-0 nova_compute[254898]: 2025-11-29 05:53:52.870 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 05:53:52 compute-0 nova_compute[254898]: 2025-11-29 05:53:52.871 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 05:53:52 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.014 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing inventories for resource provider 59594bc8-0143-475b-913f-cbe106b48966 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.130 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating ProviderTree inventory for provider 59594bc8-0143-475b-913f-cbe106b48966 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.130 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Updating inventory in ProviderTree for provider 59594bc8-0143-475b-913f-cbe106b48966 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
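Placement turns each record in that inventory into schedulable capacity as (total - reserved) * allocation_ratio. Worked out for this provider, with the values from the log lines above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0 (8 cores oversubscribed 4x), MEMORY_MB 7168.0, DISK_GB 53.1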
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.145 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing aggregate associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.171 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Refreshing trait associations for resource provider 59594bc8-0143-475b-913f-cbe106b48966, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_SATA,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,HW_CPU_X86_CLMUL,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_BMI2,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_F16C,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.191 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 05:53:53 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 05:53:53 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174227071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]: {
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:     "0": [
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:         {
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "devices": [
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "/dev/loop3"
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             ],
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_name": "ceph_lv0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_size": "21470642176",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3cc3f442-c807-4e2a-868e-a4aae87af231,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "name": "ceph_lv0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "tags": {
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.block_uuid": "ZKtckO-uFPF-8Xu2-hewk-LVO3-tHdy-ctHV6O",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.cluster_name": "ceph",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.crush_device_class": "",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.encrypted": "0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.osd_id": "0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.type": "block",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.vdo": "0"
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             },
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "type": "block",
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.603 254902 DEBUG oslo_concurrency.processutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "vg_name": "ceph_vg0"
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:         }
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:     ],
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:     "1": [
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:         {
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "devices": [
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "/dev/loop4"
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             ],
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_name": "ceph_lv1",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_size": "21470642176",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9801566-0c31-4202-a669-811037218c27,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "name": "ceph_lv1",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "tags": {
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.block_uuid": "XbYTr9-4qXz-aWRI-jnYU-1XsM-ilj3-sCR2aE",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.cluster_name": "ceph",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.crush_device_class": "",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.encrypted": "0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.osd_fsid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.osd_id": "1",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.type": "block",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.vdo": "0"
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             },
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "type": "block",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "vg_name": "ceph_vg1"
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:         }
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:     ],
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:     "2": [
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:         {
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "devices": [
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "/dev/loop5"
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             ],
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_name": "ceph_lv2",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_size": "21470642176",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=93f82912-647c-5e78-b081-707d0a2966d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=eec69945-b157-41e1-8fba-3992c2dca958,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "lv_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "name": "ceph_lv2",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "tags": {
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.block_uuid": "8UHNvF-ppfL-FBQg-KNlL-XKc7-K1vK-7WS829",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.cephx_lockbox_secret": "",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.cluster_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.cluster_name": "ceph",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.crush_device_class": "",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.encrypted": "0",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.osd_fsid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.osd_id": "2",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.type": "block",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:                 "ceph.vdo": "0"
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             },
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "type": "block",
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:             "vg_name": "ceph_vg2"
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:         }
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]:     ]
Nov 29 05:53:53 compute-0 compassionate_shockley[290432]: }
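cephadm consumes that payload to map OSD ids back to their logical volumes and backing devices. A small parsing sketch over a trimmed copy (only osd.0 shown; the other two entries have the same shape):

    import json

    raw_output = """
    {
        "0": [
            {
                "devices": ["/dev/loop3"],
                "lv_path": "/dev/ceph_vg0/ceph_lv0",
                "tags": {"ceph.osd_fsid": "3cc3f442-c807-4e2a-868e-a4aae87af231"}
            }
        ]
    }
    """

    for osd_id, lvs in json.loads(raw_output).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {lv['devices']} (fsid {lv['tags']['ceph.osd_fsid']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on ['/dev/loop3'] (fsid 3cc3f442-...)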
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.612 254902 DEBUG nova.compute.provider_tree [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed in ProviderTree for provider: 59594bc8-0143-475b-913f-cbe106b48966 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 05:53:53 compute-0 systemd[1]: libpod-248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262.scope: Deactivated successfully.
Nov 29 05:53:53 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/174227071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.662 254902 DEBUG nova.scheduler.client.report [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Inventory has not changed for provider 59594bc8-0143-475b-913f-cbe106b48966 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.664 254902 DEBUG nova.compute.resource_tracker [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 05:53:53 compute-0 nova_compute[254898]: 2025-11-29 05:53:53.664 254902 DEBUG oslo_concurrency.lockutils [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:53:53 compute-0 podman[290463]: 2025-11-29 05:53:53.686535526 +0000 UTC m=+0.033093348 container died 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 05:53:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a26e906f3d1ebdd076265aca34305673f06ffd9afc82b9a4ad5999b0b58400b-merged.mount: Deactivated successfully.
Nov 29 05:53:53 compute-0 podman[290463]: 2025-11-29 05:53:53.736526777 +0000 UTC m=+0.083084579 container remove 248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_shockley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:53:53 compute-0 systemd[1]: libpod-conmon-248c67d2b22fa43314ab59d1ae368c3c2ce99ed905a49895f789ca5997da1262.scope: Deactivated successfully.
Nov 29 05:53:53 compute-0 sudo[290289]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:53 compute-0 sudo[290478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:53:53 compute-0 sudo[290478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:53 compute-0 sudo[290478]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:53 compute-0 sudo[290503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 29 05:53:53 compute-0 sudo[290503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:53 compute-0 sudo[290503]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:54 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:54 compute-0 sudo[290528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:53:54 compute-0 sudo[290528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:54 compute-0 sudo[290528]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:54 compute-0 sudo[290553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/93f82912-647c-5e78-b081-707d0a2966d8/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 93f82912-647c-5e78-b081-707d0a2966d8 -- raw list --format json
Nov 29 05:53:54 compute-0 sudo[290553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:54 compute-0 ceph-mon[75176]: pgmap v1516: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:54 compute-0 podman[290617]: 2025-11-29 05:53:54.943368574 +0000 UTC m=+0.043975584 container create 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 05:53:54 compute-0 nova_compute[254898]: 2025-11-29 05:53:54.953 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:54 compute-0 nova_compute[254898]: 2025-11-29 05:53:54.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 05:53:54 compute-0 nova_compute[254898]: 2025-11-29 05:53:54.953 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 05:53:54 compute-0 systemd[1]: Started libpod-conmon-9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78.scope.
Nov 29 05:53:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:53:55 compute-0 podman[290617]: 2025-11-29 05:53:54.924565175 +0000 UTC m=+0.025172215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:53:55 compute-0 podman[290617]: 2025-11-29 05:53:55.025598481 +0000 UTC m=+0.126205501 container init 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 05:53:55 compute-0 podman[290617]: 2025-11-29 05:53:55.033492543 +0000 UTC m=+0.134099543 container start 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 05:53:55 compute-0 podman[290617]: 2025-11-29 05:53:55.037580143 +0000 UTC m=+0.138187193 container attach 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 05:53:55 compute-0 fervent_bassi[290634]: 167 167
Nov 29 05:53:55 compute-0 systemd[1]: libpod-9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78.scope: Deactivated successfully.
Nov 29 05:53:55 compute-0 podman[290617]: 2025-11-29 05:53:55.040427873 +0000 UTC m=+0.141034873 container died 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:53:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c29fe19f0c5dfa91351a7186d772315877c3969455e3531c820c3f2fdb2bbced-merged.mount: Deactivated successfully.
Nov 29 05:53:55 compute-0 podman[290617]: 2025-11-29 05:53:55.080123242 +0000 UTC m=+0.180730242 container remove 9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bassi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:53:55 compute-0 systemd[1]: libpod-conmon-9533c9457240711130fb473d8d18ba7f94b814097b1bc112a22cf29244ae3c78.scope: Deactivated successfully.
Nov 29 05:53:55 compute-0 podman[290658]: 2025-11-29 05:53:55.258715351 +0000 UTC m=+0.042817466 container create 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 05:53:55 compute-0 systemd[1]: Started libpod-conmon-8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0.scope.
Nov 29 05:53:55 compute-0 systemd[1]: Started libcrun container.
Nov 29 05:53:55 compute-0 podman[290658]: 2025-11-29 05:53:55.243331945 +0000 UTC m=+0.027434080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 05:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7949a41211b53f349bda3374b45ae9d0ad7c4f2865359140991f78bdeaf93ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7949a41211b53f349bda3374b45ae9d0ad7c4f2865359140991f78bdeaf93ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7949a41211b53f349bda3374b45ae9d0ad7c4f2865359140991f78bdeaf93ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7949a41211b53f349bda3374b45ae9d0ad7c4f2865359140991f78bdeaf93ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 05:53:55 compute-0 podman[290658]: 2025-11-29 05:53:55.355880893 +0000 UTC m=+0.139983008 container init 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 05:53:55 compute-0 podman[290658]: 2025-11-29 05:53:55.371366451 +0000 UTC m=+0.155468606 container start 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:53:55 compute-0 podman[290658]: 2025-11-29 05:53:55.375656495 +0000 UTC m=+0.159758630 container attach 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 05:53:56 compute-0 pensive_clarke[290674]: {
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:     "3cc3f442-c807-4e2a-868e-a4aae87af231": {
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "osd_id": 0,
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "osd_uuid": "3cc3f442-c807-4e2a-868e-a4aae87af231",
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "type": "bluestore"
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:     },
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:     "b9801566-0c31-4202-a669-811037218c27": {
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "osd_id": 1,
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "osd_uuid": "b9801566-0c31-4202-a669-811037218c27",
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "type": "bluestore"
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:     },
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:     "eec69945-b157-41e1-8fba-3992c2dca958": {
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "ceph_fsid": "93f82912-647c-5e78-b081-707d0a2966d8",
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "osd_id": 2,
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "osd_uuid": "eec69945-b157-41e1-8fba-3992c2dca958",
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:         "type": "bluestore"
Nov 29 05:53:56 compute-0 pensive_clarke[290674]:     }
Nov 29 05:53:56 compute-0 pensive_clarke[290674]: }
Nov 29 05:53:56 compute-0 systemd[1]: libpod-8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0.scope: Deactivated successfully.
Nov 29 05:53:56 compute-0 podman[290658]: 2025-11-29 05:53:56.351874883 +0000 UTC m=+1.135977008 container died 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 05:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7949a41211b53f349bda3374b45ae9d0ad7c4f2865359140991f78bdeaf93ee-merged.mount: Deactivated successfully.
Nov 29 05:53:56 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:56 compute-0 podman[290658]: 2025-11-29 05:53:56.418157281 +0000 UTC m=+1.202259396 container remove 8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_clarke, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 05:53:56 compute-0 systemd[1]: libpod-conmon-8963b16f68b2893cfa403d0d4220f83ac496429e58e26419c04e2f4321b77be0.scope: Deactivated successfully.
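The JSON emitted by the pensive_clarke container above appears to be a ceph-volume-style listing of the three bluestore OSDs on this host (osd.0 through osd.2, one per ceph_vg*/ceph_lv* logical volume, all under cluster fsid 93f82912-647c-5e78-b081-707d0a2966d8). A minimal Python sketch for summarizing such output, assuming the JSON has been captured to a hypothetical osd_list.json (for example via `journalctl -t pensive_clarke -o cat`):

    # Sketch: summarize the OSD listing JSON shown above.
    # Assumption: the JSON block was saved to osd_list.json.
    # Top-level keys are OSD UUIDs; values describe each bluestore OSD.
    import json

    with open("osd_list.json") as f:
        osds = json.load(f)

    # Print one line per OSD, ordered by osd_id.
    for osd_uuid, info in sorted(osds.items(), key=lambda kv: kv[1]["osd_id"]):
        print(f"osd.{info['osd_id']}  {info['device']}  "
              f"type={info['type']}  fsid={info['ceph_fsid']}")

Against the listing above this would print osd.0 on /dev/mapper/ceph_vg0-ceph_lv0, osd.1 on /dev/mapper/ceph_vg1-ceph_lv1, and osd.2 on /dev/mapper/ceph_vg2-ceph_lv2.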
Nov 29 05:53:56 compute-0 sudo[290553]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:56 compute-0 sshd-session[290708]: Accepted publickey for zuul from 192.168.122.10 port 48726 ssh2: ECDSA SHA256:o4cki2u41uIhjw3W3yvMuKQmE6j58gf9lg0GEBWyQAU
Nov 29 05:53:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 05:53:56 compute-0 systemd-logind[793]: New session 54 of user zuul.
Nov 29 05:53:56 compute-0 systemd[1]: Started Session 54 of User zuul.
Nov 29 05:53:56 compute-0 sshd-session[290708]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 29 05:53:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:53:56 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 05:53:56 compute-0 ceph-mon[75176]: pgmap v1517: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:56 compute-0 ceph-mon[75176]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:53:56 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 4b37723a-64e0-43a6-a6a3-947e3866b4a9 does not exist
Nov 29 05:53:56 compute-0 ceph-mgr[75473]: [progress WARNING root] complete: ev 5c224873-7571-4f16-9922-29f768da0712 does not exist
Nov 29 05:53:56 compute-0 sudo[290724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 29 05:53:56 compute-0 sudo[290724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:56 compute-0 sudo[290724]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:56 compute-0 sudo[290741]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 29 05:53:56 compute-0 sudo[290741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 29 05:53:56 compute-0 sudo[290773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 29 05:53:56 compute-0 sudo[290773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 29 05:53:56 compute-0 sudo[290773]: pam_unix(sudo:session): session closed for user root
Nov 29 05:53:56 compute-0 nova_compute[254898]: 2025-11-29 05:53:56.763 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 05:53:56 compute-0 nova_compute[254898]: 2025-11-29 05:53:56.765 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:56 compute-0 nova_compute[254898]: 2025-11-29 05:53:56.765 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 05:53:57 compute-0 nova_compute[254898]: 2025-11-29 05:53:57.096 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 05:53:57 compute-0 nova_compute[254898]: 2025-11-29 05:53:57.096 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:57 compute-0 nova_compute[254898]: 2025-11-29 05:53:57.097 254902 DEBUG nova.compute.manager [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 05:53:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:53:57 compute-0 ceph-mon[75176]: from='mgr.14132 192.168.122.100:0/4132444143' entity='mgr.compute-0.csskcz' 
Nov 29 05:53:57 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:53:58 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:58 compute-0 ceph-mon[75176]: pgmap v1518: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:53:59 compute-0 nova_compute[254898]: 2025-11-29 05:53:59.159 254902 DEBUG oslo_service.periodic_task [None req-30a97862-a7d9-4a02-9f42-ec80fe45ce8a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 05:53:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14825 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:53:59 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14827 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:53:59 compute-0 ceph-mon[75176]: from='client.14825 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:00 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 05:54:00 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015181172' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 05:54:00 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:00 compute-0 ceph-mon[75176]: from='client.14827 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:00 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3015181172' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 05:54:00 compute-0 ceph-mon[75176]: pgmap v1519: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:02 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:02 compute-0 ceph-mon[75176]: pgmap v1520: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:02 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:54:02 compute-0 ovs-vsctl[291060]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 29 05:54:03 compute-0 podman[291057]: 2025-11-29 05:54:03.008136852 +0000 UTC m=+0.061081222 container health_status 5ecd2ea569dd9dd8f35245c6b935194b0016bbea8b195395bb729a2225531209 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
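The ovn_metadata_agent health_status entry above embeds the container's config_data label as a Python-literal dict (single quotes, bare True), so json.loads will not parse it directly; ast.literal_eval will. A sketch under that assumption, with `line` standing for the raw journal line:

    # Sketch: pull the config_data dict out of a podman health_status
    # journal line like the ovn_metadata_agent entry above.
    # Assumptions: `line` holds the raw log line, and string values in
    # the dict contain no braces (true for the entry above).
    import ast

    def parse_config_data(line: str) -> dict:
        start = line.index("config_data=") + len("config_data=")
        # The dict ends at the matching closing brace; track nesting depth.
        depth, end = 0, start
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    end = i + 1
                    break
        return ast.literal_eval(line[start:end])

    # Example usage (with `line` read from the journal):
    # cfg = parse_config_data(line)
    # print(cfg["healthcheck"]["test"], len(cfg["volumes"]))

For the entry above this would yield the healthcheck command /openstack/healthcheck and the twelve bind-mounted volumes.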
Nov 29 05:54:03 compute-0 sshd-session[291031]: Invalid user old from 45.249.245.22 port 40172
Nov 29 05:54:03 compute-0 sshd-session[291031]: Received disconnect from 45.249.245.22 port 40172:11: Bye Bye [preauth]
Nov 29 05:54:03 compute-0 sshd-session[291031]: Disconnected from invalid user old 45.249.245.22 port 40172 [preauth]
Nov 29 05:54:03 compute-0 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 05:54:03 compute-0 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 05:54:03 compute-0 virtqemud[254503]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 05:54:04 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:04 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: cache status {prefix=cache status} (starting...)
Nov 29 05:54:04 compute-0 ceph-mon[75176]: pgmap v1521: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:04 compute-0 lvm[291412]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 29 05:54:04 compute-0 lvm[291412]: VG ceph_vg1 finished
Nov 29 05:54:04 compute-0 lvm[291414]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 05:54:04 compute-0 lvm[291414]: VG ceph_vg0 finished
Nov 29 05:54:04 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: client ls {prefix=client ls} (starting...)
Nov 29 05:54:04 compute-0 lvm[291450]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 29 05:54:04 compute-0 lvm[291450]: VG ceph_vg2 finished
Nov 29 05:54:04 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14831 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:05 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 05:54:05 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 05:54:05 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14833 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:05 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 05:54:05 compute-0 ceph-mon[75176]: from='client.14831 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:05 compute-0 ceph-mon[75176]: from='client.14833 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:05 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 05:54:05 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 05:54:05 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 05:54:05 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786193635' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 05:54:05 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 05:54:06 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14839 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:06 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:54:06.086+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 05:54:06 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 05:54:06 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 05:54:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 05:54:06 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415974482' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:54:06 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 05:54:06 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:06 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1786193635' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 05:54:06 compute-0 ceph-mon[75176]: from='client.14839 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:06 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1415974482' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 05:54:06 compute-0 ceph-mon[75176]: pgmap v1522: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 05:54:06 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940364020' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 05:54:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 05:54:06 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3635041196' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 05:54:06 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: ops {prefix=ops} (starting...)
Nov 29 05:54:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 05:54:06 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622458214' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 05:54:06 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 05:54:06 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995625949' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: session ls {prefix=session ls} (starting...)
Nov 29 05:54:07 compute-0 ceph-mds[101593]: mds.cephfs.compute-0.mjtuko asok_command: status {prefix=status} (starting...)
Nov 29 05:54:07 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14853 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 05:54:07 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2293629981' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3940364020' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3635041196' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1622458214' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2995625949' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mon[75176]: from='client.14853 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2293629981' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14855 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 05:54:07 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525284977' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 05:54:07 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.918572) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647918625, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 910, "num_deletes": 255, "total_data_size": 1176215, "memory_usage": 1194128, "flush_reason": "Manual Compaction"}
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647927952, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1164650, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33335, "largest_seqno": 34244, "table_properties": {"data_size": 1160132, "index_size": 2106, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10000, "raw_average_key_size": 19, "raw_value_size": 1150960, "raw_average_value_size": 2221, "num_data_blocks": 94, "num_entries": 518, "num_filter_entries": 518, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764395568, "oldest_key_time": 1764395568, "file_creation_time": 1764395647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9452 microseconds, and 4387 cpu microseconds.
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.928021) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1164650 bytes OK
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.928051) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.929305) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.929321) EVENT_LOG_v1 {"time_micros": 1764395647929316, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.929346) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1171750, prev total WAL file size 1171750, number of live WAL files 2.
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.929853) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323533' seq:0, type:0; will stop at (end)
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1137KB)], [71(8555KB)]
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647929952, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9925256, "oldest_snapshot_seqno": -1}
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6274 keys, 9634245 bytes, temperature: kUnknown
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647989292, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9634245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9592327, "index_size": 25104, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 159186, "raw_average_key_size": 25, "raw_value_size": 9479874, "raw_average_value_size": 1510, "num_data_blocks": 1018, "num_entries": 6274, "num_filter_entries": 6274, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764392871, "oldest_key_time": 0, "file_creation_time": 1764395647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e7a482e8-4a7b-461a-a1cb-36d637653226", "db_session_id": "HDG9CTZH3D8UGVBA5ZVT", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.989509) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9634245 bytes
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.990332) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.1 rd, 162.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(16.8) write-amplify(8.3) OK, records in: 6796, records dropped: 522 output_compression: NoCompression
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.990347) EVENT_LOG_v1 {"time_micros": 1764395647990340, "job": 40, "event": "compaction_finished", "compaction_time_micros": 59406, "compaction_time_cpu_micros": 23942, "output_level": 6, "num_output_files": 1, "total_output_size": 9634245, "num_input_records": 6796, "num_output_records": 6274, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647990600, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764395647991810, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.929750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.991914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.991923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.991925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.991928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 05:54:07 compute-0 ceph-mon[75176]: rocksdb: (Original Log Time 2025/11/29-05:54:07.991930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
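The burst above is the monitor's embedded RocksDB flushing its memtable (log file #72 to L0 table #73, job 39) and then manually compacting L0+L6 into table #74 (job 40, read-write-amplify 16.8). The EVENT_LOG_v1 records carry machine-readable JSON after the marker, so the figures can be pulled out mechanically; a minimal sketch for a saved excerpt (the mon_rocksdb.log path is an assumption):

    # Sketch: extract the JSON payloads from RocksDB EVENT_LOG_v1 lines
    # in a saved journal excerpt and report flush/compaction results,
    # matching the figures logged above (e.g. total_output_size 9634245
    # for job 40). Assumption: the excerpt was saved to mon_rocksdb.log.
    import json
    import re

    MARKER = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    with open("mon_rocksdb.log") as f:
        for line in f:
            m = MARKER.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            if ev.get("event") == "flush_finished":
                print(f"job {ev['job']}: flush finished, "
                      f"lsm_state {ev['lsm_state']}")
            elif ev.get("event") == "compaction_finished":
                print(f"job {ev['job']}: compacted to "
                      f"{ev['total_output_size']} bytes in "
                      f"{ev['compaction_time_micros']} us")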
Nov 29 05:54:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 05:54:08 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701728340' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:54:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 05:54:08 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1067124720' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 05:54:08 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 05:54:08 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3424644693' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 05:54:08 compute-0 ceph-mon[75176]: from='client.14855 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:08 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1525284977' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 05:54:08 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2701728340' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:54:08 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1067124720' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 05:54:08 compute-0 ceph-mon[75176]: pgmap v1523: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:08 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3424644693' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 05:54:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 05:54:08 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181782022' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 05:54:08 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14869 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:08 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:54:08.891+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 05:54:08 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 05:54:08 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 05:54:08 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4137024356' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 05:54:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14873 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 05:54:09 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2266566497' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 05:54:09 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/181782022' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 05:54:09 compute-0 ceph-mon[75176]: from='client.14869 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:09 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/4137024356' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 05:54:09 compute-0 ceph-mon[75176]: from='client.14873 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:09 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2266566497' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 05:54:09 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14875 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:09 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 05:54:09 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197600433' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 05:54:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14879 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 05:54:10 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255992290' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 05:54:10 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14883 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:10 compute-0 ceph-mon[75176]: from='client.14875 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:10 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3197600433' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 05:54:10 compute-0 ceph-mon[75176]: from='client.14879 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:10 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1255992290' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 05:54:10 compute-0 ceph-mon[75176]: pgmap v1524: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:10 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 05:54:10 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012891697' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 05:54:10 compute-0 sshd-session[292359]: Invalid user under from 192.161.60.110 port 57904
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68231168 unmapped: 917504 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:23.018672+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:24.018807+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68239360 unmapped: 909312 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:25.018941+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:26.019083+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68247552 unmapped: 901120 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:27.019223+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:28.019317+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68255744 unmapped: 892928 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:29.019429+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:30.019568+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:31.019731+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68263936 unmapped: 884736 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:32.019930+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:33.020113+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68272128 unmapped: 876544 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:34.073894+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:35.074040+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:36.074185+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68280320 unmapped: 868352 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:37.074336+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:38.074516+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68288512 unmapped: 860160 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:39.074717+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:40.074853+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:41.075011+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68296704 unmapped: 851968 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:42.075204+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:43.075316+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68304896 unmapped: 843776 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:44.075488+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:45.075632+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:46.075764+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68313088 unmapped: 835584 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:47.075888+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:48.076028+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 827392 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:49.076166+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:50.076359+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68329472 unmapped: 819200 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:51.076556+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:52.076812+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:53.076979+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68337664 unmapped: 811008 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:54.077216+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:55.077391+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 802816 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:56.077578+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:57.077746+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68354048 unmapped: 794624 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:58.077892+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:59.078018+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:00.078243+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68362240 unmapped: 786432 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:01.078493+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:02.078731+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:03.078903+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68370432 unmapped: 778240 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:04.079066+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:05.079202+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68378624 unmapped: 770048 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:06.079343+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 761856 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:07.079468+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68386816 unmapped: 761856 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:08.079603+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:09.079764+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:10.079933+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 753664 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:11.080128+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68403200 unmapped: 745472 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:12.080354+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 737280 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:13.080554+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 737280 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:14.080698+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:15.080862+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68419584 unmapped: 729088 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:16.080975+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:17.081087+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:18.081309+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:19.081495+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:20.081627+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68427776 unmapped: 720896 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:21.081814+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:22.081989+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:23.082097+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:24.082235+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68444160 unmapped: 704512 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:25.082317+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:26.082465+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:27.082617+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:28.082747+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:29.082868+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:30.082978+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:31.083104+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:32.083295+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:33.083498+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:34.083653+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:35.083790+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:36.083930+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:37.084082+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:38.084211+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:39.084316+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:40.084485+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:41.084645+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:42.084860+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:43.085047+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:44.085218+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68452352 unmapped: 696320 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:45.085329+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:46.085463+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:47.085603+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:48.085802+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:49.085999+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68468736 unmapped: 679936 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:50.086228+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:51.086428+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:52.086639+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:53.086763+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:54.086894+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:55.087016+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:56.087133+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:57.087279+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:58.087413+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:59.087549+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:00.087680+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:01.087827+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:02.088034+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:03.088132+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:04.088256+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68476928 unmapped: 671744 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:05.088374+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:06.088523+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:07.088646+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:08.088778+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:09.088934+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68493312 unmapped: 655360 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:10.089064+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:11.089190+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68501504 unmapped: 647168 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:12.089350+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:13.089505+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:14.089658+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:15.089783+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:16.089940+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:17.090167+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:18.090306+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:19.090460+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:20.090617+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 638976 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:21.090767+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:22.090922+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:23.091070+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:24.091696+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:25.091858+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:26.091996+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:27.092168+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:28.092905+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:29.093345+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:30.093549+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:31.093817+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:32.094087+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:33.094430+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:34.094741+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:35.095109+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:36.095315+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:37.095451+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:38.095583+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:39.095735+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:40.095907+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:41.096105+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:42.096305+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:43.096422+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:44.096558+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:45.096701+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:46.096817+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:47.097108+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:48.097372+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:49.097586+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:50.097698+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:51.097836+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:52.098025+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:53.098192+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:54.098363+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68526080 unmapped: 622592 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:55.098480+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:56.098614+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:57.098845+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:58.099025+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:59.099355+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:00.099595+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:01.099792+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:02.100061+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:03.100332+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:04.100483+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:05.100620+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:06.100795+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:07.101025+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:08.101238+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:09.101430+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:10.101607+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:11.101801+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:12.102052+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:13.102389+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:14.102580+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:15.102737+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:16.102902+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:17.103205+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:18.103456+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:19.103708+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:20.103940+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:21.104200+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:22.104505+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:23.104721+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:24.104883+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:25.105008+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:26.105214+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:27.105470+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:28.105630+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:29.105819+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:30.105951+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:31.106103+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:32.106405+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:33.106618+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:34.106811+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:35.106964+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 614400 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:36.107178+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:37.107342+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:38.107557+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:39.107731+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:40.107863+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:41.108067+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:42.108391+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:43.108558+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:44.108705+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:45.123319+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:46.123483+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:47.123733+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:48.123955+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:49.124139+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:50.124383+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:51.124570+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:52.124816+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:53.124968+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:54.125112+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:55.125292+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 606208 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:56.125556+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:57.125858+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:58.126032+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:59.126180+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:00.126372+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:01.126635+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:02.126848+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:03.126993+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:04.127236+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:05.127477+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:06.127672+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:07.127826+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:08.127985+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:09.128165+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:10.128303+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68550656 unmapped: 598016 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:11.128443+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:12.128610+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:13.128732+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:14.128836+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68558848 unmapped: 589824 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:15.128929+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:16.129062+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:17.129209+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:18.129332+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:19.129476+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:20.129654+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:21.129937+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:22.130097+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:23.130227+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68567040 unmapped: 581632 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:24.130477+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc ms_handle_reset ms_handle_reset con 0x557761d1dc00
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: get_auth_request con 0x557764265800 auth_method 0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_configure stats_period=5
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:25.130624+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:26.130768+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:27.130943+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:28.131059+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:29.131223+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:30.131395+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:31.131653+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:32.131917+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 sshd-session[292359]: Received disconnect from 192.161.60.110 port 57904:11: Bye Bye [preauth]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:33.132097+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:34.132234+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 sshd-session[292359]: Disconnected from invalid user under 192.161.60.110 port 57904 [preauth]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:35.132362+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:36.132575+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:37.132806+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:38.132999+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:39.133245+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:40.133513+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:41.133616+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:42.133761+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:43.133920+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:44.134107+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:45.134218+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:46.134323+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:47.134432+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:48.134612+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:49.134809+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:50.135019+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:51.135183+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:52.135391+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:53.135511+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:54.135667+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:55.135823+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:56.135942+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:57.136078+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:58.136226+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:59.136351+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:00.136467+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:01.136579+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:02.136960+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:03.137140+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:04.137340+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:05.137501+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:06.137697+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:07.137838+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:08.137988+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:09.138108+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:10.138324+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 393216 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:11.138543+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:12.138725+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:13.138839+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:14.139055+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:15.139210+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
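Each rebalance pass is followed by this pair of rocksdb commit_cache_size messages resetting the block cache's high-priority pool ratio. The two values are conspicuously clean fractions, 0.285714 = 2/7 and 0.0555556 = 1/18, which hints that they are derived from a fixed ratio of cache allocations rather than tuned continuously; that reading is a hypothesis, since the log alone does not show the inputs:

    # The logged ratios recovered as exact small fractions:
    from fractions import Fraction
    print(Fraction(0.285714).limit_denominator(100))   # 2/7
    print(Fraction(0.0555556).limit_denominator(100))  # 1/18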
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:16.139363+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:17.139504+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:18.139665+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:19.139814+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:20.139980+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:21.140148+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:22.140373+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:23.140552+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:24.140686+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:25.140898+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:26.141149+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:27.141345+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:28.141507+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:29.141650+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:30.141813+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:31.141997+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:32.142880+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:33.143030+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:34.143160+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:35.143357+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:36.143499+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:37.143645+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:38.143845+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:39.144029+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:40.144232+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
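Here the tune_memory counters move for the first time in a while: mapped grows by 8192 bytes and unmapped shrinks by the same amount, i.e. exactly two 4 KiB pages were touched back in, with the heap itself unchanged. The invariant heap = mapped + unmapped holds for every sample in this capture:

    # Sanity check on the tune_memory counters from the surrounding lines:
    samples = [(68763648, 385024), (68771840, 376832)]  # (mapped, unmapped)
    heap = 69148672
    assert all(mapped + unmapped == heap for mapped, unmapped in samples)
    print(samples[1][0] - samples[0][0])  # 8192 bytes: two 4 KiB pages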
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:41.144373+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:42.144557+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:43.144676+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:44.144817+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:45.144956+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:46.145096+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:47.145254+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:48.145487+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:49.145680+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:50.145865+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:51.146092+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:52.146356+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:53.146501+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:54.146704+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:55.146861+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:56.147055+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:57.147198+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:58.147363+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:59.147480+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:00.147604+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:01.147768+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:02.147945+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:03.148056+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:04.148247+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:05.148438+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:06.148605+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:07.148747+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:08.148860+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:09.149180+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:10.149496+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:11.149814+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:12.151061+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:13.151295+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:14.151632+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:15.151823+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:16.152051+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:17.152251+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:18.152502+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:19.152730+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:20.152986+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:21.153203+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:22.153568+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:23.153800+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:24.154038+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:25.154309+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:26.154516+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:27.154701+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:28.154849+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:29.155034+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:30.155192+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:31.155322+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:32.155613+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:33.155832+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:34.156026+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:35.156165+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:36.156380+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:37.156499+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:38.156630+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:39.156804+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:40.156997+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:41.157148+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:42.157756+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:43.157924+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:44.158041+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:45.158187+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:46.158345+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:47.158490+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:48.158673+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:49.158910+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:50.159088+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:51.159300+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:52.159596+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:53.159848+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:54.160144+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68780032 unmapped: 368640 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:55.160305+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:56.160506+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:57.160841+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:58.161120+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:59.161410+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:00.161702+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:01.161952+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:02.162223+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:03.162429+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:04.162658+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:05.162906+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:06.163225+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:07.163567+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:08.163828+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:09.164026+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:10.164246+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:11.164468+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:12.164709+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:13.165017+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:14.165421+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 352256 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:15.165619+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 335872 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:16.165769+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:17.165908+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:18.166070+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:19.166467+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:20.166659+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:21.166872+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:22.167103+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:23.167383+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:24.167644+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:25.167906+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:26.168120+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:27.168444+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:28.168596+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:29.168776+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68820992 unmapped: 327680 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:30.168994+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:31.169215+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:32.169431+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:33.169622+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:34.169803+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68829184 unmapped: 319488 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:35.169986+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:36.193432+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:37.193690+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:38.193987+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:39.194235+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:40.194525+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:41.194768+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:42.195066+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:43.195399+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:44.195628+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:45.195950+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:46.196258+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:47.196503+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:48.196661+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:49.196861+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:50.197065+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:51.197340+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:52.197729+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:53.197992+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:54.198156+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68845568 unmapped: 303104 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:55.198335+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:56.198446+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:57.198635+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:58.198818+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:59.199023+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:00.199160+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:01.199345+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:02.199538+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:03.199729+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:04.199924+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:05.200112+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:06.200326+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:07.200486+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:08.200622+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:09.200744+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 278528 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:10.200917+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 417792 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:11.201091+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:12.201236+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:13.201407+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:14.201564+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:15.201720+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:16.201884+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:17.202022+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:18.202155+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:19.202371+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:20.202500+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:21.202629+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:22.202801+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:23.202936+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:24.203090+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:25.203224+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:26.203369+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:27.203535+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:28.203819+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68747264 unmapped: 401408 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:29.204061+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:30.204370+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:31.204600+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:32.204869+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:33.205178+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68763648 unmapped: 385024 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:34.205426+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:35.205705+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:36.205907+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:37.206137+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:38.206338+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:39.206607+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:40.206820+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:41.207036+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:42.207298+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:43.207471+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:44.207623+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:45.207781+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:46.207916+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:47.208067+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:48.208230+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68771840 unmapped: 376832 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:49.208487+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:50.208716+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:51.208961+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:52.209259+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:53.209518+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:54.209693+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:55.209891+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:56.210025+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:57.210189+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:58.210372+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:59.210516+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:00.210696+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:01.210878+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:02.211039+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:03.211224+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:04.211386+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:05.211571+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:06.211754+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:07.211910+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:08.212208+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68788224 unmapped: 360448 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:09.212389+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:10.212610+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:11.212837+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:12.213102+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:13.213309+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:14.213487+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:15.213660+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:16.213819+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:17.213986+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 344064 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5631 writes, 23K keys, 5631 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5631 writes, 860 syncs, 6.55 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x557761bc6dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
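The indented block ending here is one multi-line RocksDB statistics dump (journald renders continuation lines without a timestamp prefix). Each bracketed tag such as [p-0], [O-2], [L] or [P] names a RocksDB column family; BlueStore shards its keyspace across them, and empty Sum/Int rows or empty Priority tables simply mean no flushes or compactions ran in that family during the 600 s interval. The occupancy value 18446744073709551615 is 2**64 - 1, i.e. uint64_t(-1), which reads as a not-tracked sentinel from Ceph's BinnedLRUCache rather than a real entry count. Below is a minimal parsing sketch for the per-cache summary lines, assuming the OSD log has been captured to a plain text file; the file name is a placeholder and the KB/MB/GB suffixes are treated as 1024-based, which matches how RocksDB prints them.

import re

# Minimal sketch: pull the "Block cache ..." summary lines out of a captured
# OSD log and report how full each cache instance is.
CACHE_RE = re.compile(
    r"Block cache (\S+) capacity: ([\d.]+) (KB|MB|GB) usage: ([\d.]+) (KB|MB|GB)"
)
UNIT = {"KB": 2**10, "MB": 2**20, "GB": 2**30}

def cache_usage(path):
    out = {}
    for line in open(path):
        m = CACHE_RE.search(line)
        if m:
            name, cap, cap_u, use, use_u = m.groups()
            out[name] = (float(cap) * UNIT[cap_u], float(use) * UNIT[use_u])
    return out

for name, (cap, use) in cache_usage("ceph-osd.2.log").items():  # placeholder path
    print(f"{name}: {use:.0f} / {cap:.0f} bytes ({use / cap:.6%})")

On this dump that yields two caches, both essentially empty: 2.09 KB used of 1.12 GB, and 0.45 KB used of 224.00 MB.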
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:18.214166+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
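The heartbeat line embeds a store_statfs summary as hex byte counts. Going by the store_statfs_t printer in Ceph of this vintage, the first group is available / internally reserved / total bytes and the data group is stored / allocated; that field order is an assumption about this release, so treat the decode below as a sketch rather than a definitive reading.

# Sketch: decode the first hex group of the store_statfs line above.
# Field order (available / internally reserved / total) is an assumption
# taken from Ceph's store_statfs_t operator<<.
raw = "0x4fcab9000/0x0/0x4ffc00000"
available, reserved, total = (int(x, 16) for x in raw.split("/"))

GiB = 2**30
print(f"total:     {total / GiB:.2f} GiB")      # ~20.00 GiB
print(f"available: {available / GiB:.2f} GiB")  # ~19.95 GiB
print(f"used:      {(total - available) / GiB:.3f} GiB")

So this OSD backs a ~20 GiB device with roughly 48 MiB consumed, consistent with the near-idle compaction statistics above.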
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:19.214384+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:20.214599+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:21.214775+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
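The three lines above form one cache-tuning pass: prioritycache tune_memory compares the 4 GiB target (4294967296) against the ~66 MiB actually mapped and leaves the cache budget at 2845415832 bytes (old mem == new mem, so no resize); the paired commit_cache_size "High Pri Pool Ratio" messages re-apply per-cache ratios, apparently one per BinnedLRUCache instance; and _resize_shards then splits the budget across the kv, kv_onode, meta and data caches. Notably, kv_alloc (1207959552 B ≈ 1.12 GiB) and kv_onode_alloc (234881024 B = 224.00 MiB) line up with the two block-cache capacities in the RocksDB dump. A quick check that the shards roughly sum to the budget, with the values copied from the line above:

# Back-of-envelope check on the _resize_shards line: per-cache allocations
# should roughly sum to cache_size (values copied from the log).
cache_size = 2845415832
alloc = {
    "kv":       1207959552,
    "kv_onode":  234881024,
    "meta":     1140850688,
    "data":      218103808,
}
for name, b in alloc.items():
    print(f"{name:8s} {b / 2**20:8.1f} MiB  ({b / cache_size:5.1%})")
print(f"sum      {sum(alloc.values()) / 2**20:8.1f} MiB "
      f"of {cache_size / 2**20:.1f} MiB budget")

That prints roughly 2672 MiB allocated out of a 2714 MiB budget, with kv and meta taking about 42% and 40% respectively.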
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:22.215011+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:23.215161+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:24.215343+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:25.215503+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:26.215662+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:27.215843+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:28.215990+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68837376 unmapped: 311296 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:29.216110+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:30.216375+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:31.216525+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:32.216734+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:33.216871+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:34.217015+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:35.217158+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68853760 unmapped: 294912 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:36.217326+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:37.217508+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:38.217782+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:39.217981+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:40.218152+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:41.218348+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:42.218556+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:43.218711+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:44.218885+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:45.219084+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:46.219319+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:47.219494+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:48.219652+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:49.219812+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:50.219968+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:51.220141+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:52.220335+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:53.220669+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:54.220848+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:55.221133+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:56.221399+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:57.221674+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:58.221913+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:59.222058+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:00.222227+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:01.222439+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:02.222672+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:03.222885+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:04.223047+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:05.223203+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:06.223367+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:07.223563+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:08.223805+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68861952 unmapped: 286720 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:09.224001+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68878336 unmapped: 270336 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:10.224173+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.950073242s of 600.213012695s, submitted: 90
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 68902912 unmapped: 245760 heap: 69148672 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:11.224307+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:12.224467+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:13.224636+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:14.224863+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:15.225057+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:16.225198+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:17.225365+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:18.225571+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:19.225781+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:20.225936+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:21.226086+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:22.226259+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:23.226422+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:24.226611+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:25.226766+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:26.226951+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:27.227073+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:28.227246+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:29.227495+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:30.227641+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:31.227784+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:32.228016+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:33.228183+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 974848 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:34.228386+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:35.228521+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:36.228654+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:37.228814+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:38.228957+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:39.229098+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:40.229246+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:41.229466+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:42.229663+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:43.229795+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:44.229939+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:45.230063+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:46.230230+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:47.230444+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:48.230653+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:49.230862+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:50.231075+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:51.231227+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:52.231421+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:53.231624+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:54.231787+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:55.232026+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:56.232188+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:57.232384+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:58.232576+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:59.232725+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:00.232858+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:01.233028+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:02.233203+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:03.233400+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:04.233578+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:05.233719+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:06.233873+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:07.234035+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:08.234167+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:09.234349+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:10.234523+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:11.234699+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:12.234879+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:13.235061+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:14.235213+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:15.235432+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:16.235630+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:17.235809+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:18.236020+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:19.236230+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:20.236449+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:21.236632+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:22.236811+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:23.237010+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:24.237158+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:25.237328+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:26.237492+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:27.237627+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:28.237775+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:29.238221+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:30.238562+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:31.238748+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:32.238994+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:33.239193+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:34.239380+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:35.239682+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:36.239830+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:37.240065+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:38.240340+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:39.240566+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:40.240749+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:41.240903+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:42.241073+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:43.241360+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:44.241538+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:45.241694+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:46.241851+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:47.241991+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:48.242113+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:49.242335+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:50.242555+0000)
Nov 29 05:54:10 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14887 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
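[editor's note] The single ceph-mgr line buried in this stretch is an audit record: client.admin dispatched an "orch ps" to the orchestrator, i.e. someone ran the equivalent of `ceph orch ps` while the OSD was ticking away. The cmd field is plain JSON, so it can be pulled out mechanically:

```python
# Pull the dispatched command out of the mgr audit record; the cmd field
# is a JSON array, so it parses directly.
import json

audit = ("from='client.14887 -' entity='client.admin' "
         'cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch')

cmd = json.loads(audit[audit.index("cmd=") + 4 : audit.rindex("]") + 1])
print(cmd[0]["prefix"], cmd[0]["target"])  # orch ps ['mon-mgr', '']
```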
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:51.242710+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:52.242961+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:53.243120+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:54.243310+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:55.243483+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:56.243655+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:57.243858+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:58.244079+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:59.244249+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:00.244485+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:01.244680+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:02.245039+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:03.245342+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:04.245572+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:05.245837+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:06.246165+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:07.246425+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:08.246656+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:09.246855+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:10.247027+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:11.247215+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70279168 unmapped: 966656 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:12.247594+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:13.247889+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:14.248141+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:15.248350+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:16.248563+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:17.248839+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:18.249047+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:19.249336+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:20.249641+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:21.249844+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:22.250074+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:23.250249+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:24.250458+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:25.250595+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:26.250828+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:27.250989+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:28.251180+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:29.251373+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:30.251546+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:31.251767+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:32.252048+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:33.252327+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:34.252496+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:35.252675+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:36.252843+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:37.252989+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:38.253215+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:39.253421+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:40.253582+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 958464 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:41.253932+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:42.254314+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:43.254582+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:44.254755+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:45.254891+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:46.255101+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:47.255339+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:48.255524+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:49.255709+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:50.255883+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:51.256094+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:52.256238+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:53.256402+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:54.256562+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:55.256722+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:56.256894+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:57.257065+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:58.257219+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:59.257365+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:00.257510+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:01.257717+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:02.257925+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:03.258120+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:04.258306+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:05.258580+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:06.258817+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:07.259016+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:08.259256+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:09.259482+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:10.259651+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:11.259960+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:12.260188+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:13.260364+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70295552 unmapped: 950272 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:14.260525+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:15.260665+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:16.260947+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:17.261256+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:18.261539+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:19.261731+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:20.261941+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:21.262210+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:22.262538+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:23.262680+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:24.262792+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:25.262914+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 942080 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:26.263355+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:27.263560+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817343 data_alloc: 218103808 data_used: 172032
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:28.263822+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:29.264111+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab9000/0x0/0x4ffc00000, data 0xb7abd/0x165000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 933888 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557763f08000
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:30.264338+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
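[annotation] Map-update traffic starts here: the OSD holds epoch 120 and the sender offers [120,121] out of its full [1,121] history. A toy version of the catch-up rule these lines imply (apply contiguously from have+1 through the newest offered epoch); this is a sketch, not Ceph's actual handler:

    def epochs_to_apply(have: int, first: int, last: int) -> range:
        """Epochs worth applying from an offer of [first, last], given we hold `have`."""
        if last <= have:
            return range(0)            # nothing newer than what we already hold
        start = max(first, have + 1)   # maps must be applied contiguously
        return range(start, last + 1)

    print(list(epochs_to_apply(120, 120, 121)))  # [121]
    print(list(epochs_to_apply(121, 122, 122)))  # [122], matching the next handle_osd_map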
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 200.325714111s of 200.562088013s, submitted: 90
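[annotation] The _kv_sync_thread utilization line is a ready-made duty-cycle report; converted to a busy percentage and a rough per-submission cost:

    idle, total, submitted = 200.325714111, 200.562088013, 90

    busy = total - idle
    print(f"busy {busy:.3f}s of {total:.3f}s ({busy / total:.3%}), "
          f"~{busy / submitted * 1e3:.1f} ms per submitted batch")
    # busy 0.236s of 200.562s (0.118%), ~2.6 ms per submitted batch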
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:31.264529+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 876544 heap: 71245824 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:32.264705+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 17432576 heap: 88031232 old mem: 2845415832 new mem: 2845415832
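[annotation] Right after the epoch-121/122 maps land, the tune_memory numbers jump: the heap grows by about 16 MiB and nearly all of the growth shows up as unmapped. In every one of these lines mapped + unmapped equals heap exactly, so the spike is transient allocator growth handed straight back as unmapped pages (consistent with tcmalloc behavior). Checking both claims against this line and the last pre-jump one:

    prev = {"mapped": 70369280, "unmapped":   876544, "heap": 71245824}
    curr = {"mapped": 70598656, "unmapped": 17432576, "heap": 88031232}

    for snap in (prev, curr):
        assert snap["mapped"] + snap["unmapped"] == snap["heap"]
    for key in prev:
        print(f"{key:9s} {prev[key]:>10,} -> {curr[key]:>10,}  ({curr[key] - prev[key]:+,})")
    # heap grows +16,785,408 while unmapped grows +16,556,032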
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 916268 data_alloc: 218103808 data_used: 180224
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 123 ms_handle_reset con 0x557763f08000 session 0x5577631b30e0
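[annotation] The connection reset here names the same con pointer (0x557763f08000) that handle_auth_request challenged a few lines up: a peer dialed in, authenticated, delivered its payload, and dropped the session. Correlating journal lines by pointer makes such pairings easy to spot; a small grouping sketch over the two lines in question:

    import re
    from collections import defaultdict

    events = [
        "monclient: handle_auth_request added challenge on 0x557763f08000",
        "osd.2 123 ms_handle_reset con 0x557763f08000 session 0x5577631b30e0",
    ]
    by_ptr = defaultdict(list)
    for ev in events:
        for ptr in re.findall(r"0x[0-9a-f]+", ev):
            by_ptr[ptr].append(ev)
    print(by_ptr["0x557763f08000"])  # both events: the challenge, then the reset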
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:33.264882+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 17440768 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557765b97c00
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:34.265085+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70549504 unmapped: 17481728 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:35.265233+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 124 ms_handle_reset con 0x557765b97c00 session 0x557765010000
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbe39000/0x0/0x4ffc00000, data 0xd2e970/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
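[annotation] With the new epochs the heartbeat payload finally changes: relative to the epoch-120 heartbeats, available space drops and the stored-data counters grow by matching amounts. The deltas, straight from the two hex payloads (field names assumed as in the earlier parsing sketch):

    old = {"avail": 0x4fcab9000, "data_stored": 0xb7abd,  "data_alloc": 0x165000}
    new = {"avail": 0x4fbe39000, "data_stored": 0xd2e970, "data_alloc": 0xde3000}

    for key in old:
        delta = new[key] - old[key]
        print(f"{key:11s} {delta:+,} bytes ({delta / 2**20:+.1f} MiB)")
    # avail       -13,107,200 bytes (-12.5 MiB)
    # data_stored +13,070,003 bytes (+12.5 MiB)
    # data_alloc  +13,099,008 bytes (+12.5 MiB)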
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70623232 unmapped: 17408000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:36.265420+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:37.265632+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925293 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:38.265860+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:39.266071+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:40.266322+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:41.266568+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 124 heartbeat osd_stat(store_statfs(0x4fbe38000/0x0/0x4ffc00000, data 0xd2e993/0xde4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.306422234s of 10.512654305s, submitted: 45
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:42.266786+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:43.266966+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:44.267166+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:45.267366+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:46.267560+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:47.267761+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:48.267916+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:49.268131+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:50.268361+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:51.268554+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:52.268808+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927243 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:53.269026+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557765b96000
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.103341103s of 12.113625526s, submitted: 13
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:54.269228+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 17391616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 10
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
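[annotation] The mgr map hands back an address vector carrying both messenger protocols for one daemon; in Ceph's address notation the number after the slash is the per-process nonce. Splitting the vector apart:

    import re

    line = ("mgrc handle_mgr_map Active mgr is now "
            "[v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]")

    for proto, ip, port, nonce in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", line):
        print(f"{proto}: {ip} port {port} nonce {nonce}")
    # v2: 192.168.122.100 port 6800 nonce 1460327761
    # v1: 192.168.122.100 port 6801 nonce 1460327761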
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:55.269377+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:56.269591+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:57.269730+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929899 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:58.269935+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70647808 unmapped: 17383424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:59.270106+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:00.270252+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:01.270399+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:02.270651+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70656000 unmapped: 17375232 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929899 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 11
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:03.270826+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557765b96400
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.626058578s of 10.632491112s, submitted: 2
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:04.270999+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:05.271145+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70672384 unmapped: 17358848 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:06.271319+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:07.271453+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929723 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:08.271598+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:09.271777+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:10.272240+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 17350656 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:11.272612+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:12.272897+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929723 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:13.273114+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe35000/0x0/0x4ffc00000, data 0xd3052c/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:14.273248+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:15.273482+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:16.273639+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.001748085s of 12.013872147s, submitted: 4
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:17.274080+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928857 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:18.274339+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:19.274482+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:20.274674+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:21.274891+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:22.275029+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928857 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:23.275156+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe36000/0x0/0x4ffc00000, data 0xd30491/0xde8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:24.275299+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:25.275485+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:26.276446+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:27.276750+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930625 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:28.276995+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.040862083s of 12.053675652s, submitted: 4
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:29.277233+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:30.277460+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:31.277637+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:32.277917+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928167 data_alloc: 218103808 data_used: 184320
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:33.278094+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:34.278233+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 17342464 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:35.278418+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fbe37000/0x0/0x4ffc00000, data 0xd303f6/0xde7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 125 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:36.278627+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:37.278759+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931461 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:38.278878+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:39.279016+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:40.279177+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:41.279486+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:42.279691+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 126 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931461 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:43.279814+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:44.279993+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70696960 unmapped: 17334272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.859819412s of 16.871786118s, submitted: 28
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe34000/0x0/0x4ffc00000, data 0xd31fdc/0xdea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:45.280172+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557765b96800
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 17309696 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:46.280301+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 17309696 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 12
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:47.280425+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70795264 unmapped: 17235968 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 939171 data_alloc: 218103808 data_used: 200704
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:48.280592+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 17219584 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:49.280712+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe2e000/0x0/0x4ffc00000, data 0xd33b54/0xdee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 17219584 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:50.280823+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:51.280940+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:52.281073+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:53.281176+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 17211392 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:54.281335+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:55.281430+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:56.281555+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:57.281698+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:58.281844+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:59.282008+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:00.282134+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd33a3f/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:01.282212+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:02.282404+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 17203200 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934755 data_alloc: 218103808 data_used: 200704
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:03.282568+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.211776733s of 18.236698151s, submitted: 18
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:04.282714+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 129 heartbeat osd_stat(store_statfs(0x4fbe29000/0x0/0x4ffc00000, data 0xd372c6/0xdf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:05.282827+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 17121280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:06.282937+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 17104896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:07.283085+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fbe25000/0x0/0x4ffc00000, data 0xd38edc/0xdf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 17104896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951491 data_alloc: 218103808 data_used: 208896
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:08.283204+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fbe23000/0x0/0x4ffc00000, data 0xd3aaf2/0xdfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:09.283341+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:10.283461+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:11.283642+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe1f000/0x0/0x4ffc00000, data 0xd3c793/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70950912 unmapped: 17080320 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:12.283789+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955423 data_alloc: 218103808 data_used: 212992
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:13.283901+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.339121819s of 10.671369553s, submitted: 123
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:14.284046+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 17063936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:15.284242+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fbe20000/0x0/0x4ffc00000, data 0xd3c793/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 17047552 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:16.284481+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 15990784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:17.284648+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe19000/0x0/0x4ffc00000, data 0xd3fd71/0xe03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 15990784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961321 data_alloc: 218103808 data_used: 221184
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:18.284770+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:19.284950+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:20.285079+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 15949824 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:21.285193+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 134 heartbeat osd_stat(store_statfs(0x4fbe1c000/0x0/0x4ffc00000, data 0xd3fcd6/0xe02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:22.285336+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:23.285472+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959079 data_alloc: 218103808 data_used: 221184
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:24.285603+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.049346924s of 10.169968605s, submitted: 40
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:25.285734+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:26.286176+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:27.286382+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 15941632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:28.286588+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965021 data_alloc: 218103808 data_used: 229376
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:29.286705+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe17000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:30.286889+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:31.287099+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 15925248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:32.287337+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 15917056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:33.287467+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 15884288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964141 data_alloc: 218103808 data_used: 229376
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fbe18000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe18000/0x0/0x4ffc00000, data 0xd417f4/0xe06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:34.287664+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 15892480 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:35.287769+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 15892480 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.991994858s of 11.068979263s, submitted: 40
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:36.287880+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:37.287988+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe14000/0x0/0x4ffc00000, data 0xd433da/0xe09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:38.288143+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968139 data_alloc: 218103808 data_used: 237568
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:39.288353+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 15884288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:40.288450+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:41.288588+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:42.288758+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 15876096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:43.289009+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970423 data_alloc: 218103808 data_used: 237568
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:44.289146+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:45.289341+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:46.289595+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:47.289800+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 15859712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.993079185s of 12.021212578s, submitted: 14
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44da2/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:48.290245+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 15851520 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970423 data_alloc: 218103808 data_used: 237568
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:49.290487+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 15851520 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:50.290688+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:51.290915+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:52.291215+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:53.291337+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:54.291495+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:55.291711+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:56.291952+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:57.292215+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 15835136 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.520608902s of 10.532555580s, submitted: 3
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:58.292462+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:59.292679+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:00.292898+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:01.293100+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:02.293337+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:03.293512+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:04.293642+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:05.293834+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:06.294028+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:07.294215+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:08.294360+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971311 data_alloc: 218103808 data_used: 237568
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:09.294557+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 15818752 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:10.294768+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 15802368 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:11.294910+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.498485565s of 13.504686356s, submitted: 2
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 ms_handle_reset con 0x557765b96800 session 0x557764f4fe00
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:12.295092+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd44e3d/0xe0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 13
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:13.295219+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971135 data_alloc: 218103808 data_used: 237568
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:14.295391+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 15015936 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:15.295510+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd44f73/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:16.295670+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:17.295824+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 14991360 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:18.295955+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 14983168 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974495 data_alloc: 218103808 data_used: 237568
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd44f73/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:19.296131+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:20.296248+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0a000/0x0/0x4ffc00000, data 0xd48629/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:21.296485+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:22.296727+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0a000/0x0/0x4ffc00000, data 0xd48629/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:23.296861+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.806947708s of 11.988073349s, submitted: 235
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980069 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:24.297032+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 139 heartbeat osd_stat(store_statfs(0x4fbe0b000/0x0/0x4ffc00000, data 0xd4858e/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:25.297192+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:26.297343+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:27.297498+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:28.297722+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:29.297897+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:30.298085+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:31.298301+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:32.298539+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:33.298683+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:34.298898+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:35.299150+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:36.299357+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:37.299588+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:38.299725+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:39.299845+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:40.300016+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:41.300166+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:42.300418+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:43.300602+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982371 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:44.300788+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:45.300967+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe09000/0x0/0x4ffc00000, data 0xd49ff1/0xe14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:46.301139+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:47.301346+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.121107101s of 24.133726120s, submitted: 13
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd4a08c/0xe15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:48.301529+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 14966784 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984139 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:49.301652+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:50.301817+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd4a127/0xe16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:51.301962+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 14893056 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:52.302152+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:53.302288+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986619 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd4a186/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:54.302478+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:55.302612+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd4a186/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:56.302752+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 14884864 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:57.302846+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:58.302958+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986571 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:59.303120+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 14876672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.074976921s of 12.103597641s, submitted: 7
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:00.303219+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe06000/0x0/0x4ffc00000, data 0xd4a157/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:01.303376+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:02.303498+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:03.303619+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 14852096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988043 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fbe06000/0x0/0x4ffc00000, data 0xd4a185/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:04.303765+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 14843904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 140 handle_osd_map epochs [141,142], i have 140, src has [1,142]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:05.303889+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:06.304012+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:07.304128+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:08.304381+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993155 data_alloc: 218103808 data_used: 253952
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fbe01000/0x0/0x4ffc00000, data 0xd4d8db/0xe1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:09.304667+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:10.304839+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 13713408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.097406387s of 11.276707649s, submitted: 61
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:11.304939+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:12.305172+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:13.305361+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 13672448 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997329 data_alloc: 218103808 data_used: 262144
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:14.305551+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f327/0xe1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:15.305785+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:16.305935+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:17.306060+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 13639680 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:18.306251+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4f3f4/0xe20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998041 data_alloc: 218103808 data_used: 262144
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:19.306453+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:20.306589+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 13656064 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.991775513s of 10.043452263s, submitted: 26
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:21.306721+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74211328 unmapped: 13819904 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:22.306889+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:23.307003+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000645 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:24.307140+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74203136 unmapped: 13828096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:25.307384+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:26.307524+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e25/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:27.307674+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:28.307805+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 13778944 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002413 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:29.308145+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50f7f/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:30.308318+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:31.308454+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 13737984 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:32.308626+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.350621223s of 11.425502777s, submitted: 31
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:33.308764+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf5000/0x0/0x4ffc00000, data 0xd51047/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007621 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:34.308916+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:35.309078+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 13664256 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:36.309238+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 12541952 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:37.309364+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf2000/0x0/0x4ffc00000, data 0xd511a7/0xe28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 12517376 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:38.309501+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 12517376 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011157 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:39.309717+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 12509184 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:40.309855+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 12460032 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:41.309976+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 12451840 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf3000/0x0/0x4ffc00000, data 0xd5117b/0xe28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:42.310134+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 12451840 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:43.310317+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.751610756s of 11.044014931s, submitted: 37
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 12419072 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010409 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:44.310502+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:45.310729+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:46.310971+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 12394496 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf4000/0x0/0x4ffc00000, data 0xd510b1/0xe27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:47.311185+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 12386304 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:48.311317+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50fe8/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010199 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:49.311471+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:50.311671+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf6000/0x0/0x4ffc00000, data 0xd50fb7/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:51.311786+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:52.311976+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:53.312170+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006959 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:54.312325+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.834184647s of 10.926655769s, submitted: 30
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50dbd/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:55.312496+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:56.312637+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:57.312780+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 12353536 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:58.312941+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008855 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:59.313061+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:00.315401+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:01.315521+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50e84/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 12296192 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:02.315685+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:03.315800+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010495 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:04.315921+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.022552490s of 10.158326149s, submitted: 18
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:05.316091+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:06.316293+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e58/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:07.316418+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e58/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:08.316596+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 12288000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008551 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:09.316724+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:10.316902+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:11.317032+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 12271616 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:12.317243+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfc000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:13.317422+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007861 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:14.317822+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:15.318030+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:16.318216+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.966061592s of 12.095813751s, submitted: 15
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:17.318432+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50dbc/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:18.318612+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007861 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:19.318773+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:20.318930+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 12263424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfb000/0x0/0x4ffc00000, data 0xd50d8a/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:21.319108+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:22.319334+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:23.319472+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1009501 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:24.319632+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:25.319817+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:26.320013+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd50e51/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 12214272 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.408202171s of 10.675523758s, submitted: 17
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:27.320141+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:28.320341+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011349 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:29.320531+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50f47/0xe24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:30.320657+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:31.320811+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 75857920 unmapped: 12173312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:32.321044+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12001280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd526bd/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:33.321195+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 12001280 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1018011 data_alloc: 218103808 data_used: 278528
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:34.321334+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbded000/0x0/0x4ffc00000, data 0xd5a18b/0xe2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:35.321460+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:36.321613+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 10010624 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:37.321712+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.418998718s of 10.583705902s, submitted: 59
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 9740288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbdc6000/0x0/0x4ffc00000, data 0xd839d6/0xe57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:38.321849+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 9740288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1024267 data_alloc: 218103808 data_used: 278528
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:39.322008+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fbdb9000/0x0/0x4ffc00000, data 0xd9187a/0xe64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,1] op hist [1])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 80814080 unmapped: 7217152 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:40.322105+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 6660096 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:41.322237+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 6668288 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:42.322414+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fabc9000/0x0/0x4ffc00000, data 0xde24b0/0xeb5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 6643712 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:43.322564+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028237 data_alloc: 218103808 data_used: 278528
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 6725632 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:44.322719+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 6709248 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 145 heartbeat osd_stat(store_statfs(0x4fab98000/0x0/0x4ffc00000, data 0xe11be3/0xee5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:45.322835+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 5660672 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:46.322989+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 5603328 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:47.323114+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.068192482s of 10.000307083s, submitted: 80
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fab94000/0x0/0x4ffc00000, data 0xe13646/0xee8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5120000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:48.323255+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1037963 data_alloc: 218103808 data_used: 286720
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 5120000 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 146 heartbeat osd_stat(store_statfs(0x4fab47000/0x0/0x4ffc00000, data 0xe61e3d/0xf36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:49.323446+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 4784128 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:50.323567+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 4300800 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:51.323692+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 4071424 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:52.323857+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3792896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faaea000/0x0/0x4ffc00000, data 0xebf2b3/0xf94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:53.323979+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043289 data_alloc: 218103808 data_used: 294912
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 3792896 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:54.324129+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3252224 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:55.324234+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faab2000/0x0/0x4ffc00000, data 0xef3a69/0xfca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 2957312 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:56.324405+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 2662400 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:57.324583+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faaa0000/0x0/0x4ffc00000, data 0xf07fb6/0xfde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.752583504s of 10.000064850s, submitted: 81
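[annotation] The _kv_sync_thread line reports idle time within the sampling window, so the busy fraction falls out directly: 1 - 9.752583504/10.000064850 ≈ 2.5%, over 81 submitted transactions. A quick check:

    idle, period, submitted = 9.752583504, 10.000064850, 81
    busy = 1 - idle / period
    print(f"busy {busy:.2%}, {submitted / period:.1f} txns/s")  # busy ~2.47%, ~8.1 txns/s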
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 1425408 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:58.324729+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048977 data_alloc: 218103808 data_used: 294912
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86614016 unmapped: 1417216 heap: 88031232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:59.324871+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 147 heartbeat osd_stat(store_statfs(0x4faa4a000/0x0/0x4ffc00000, data 0xf5e02d/0x1034000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:00.324972+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86704128 unmapped: 2375680 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:01.325118+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 2940928 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:02.325301+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2990080 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:03.325410+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa5ca000/0x0/0x4ffc00000, data 0xfc91da/0x10a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067411 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 2990080 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa5ca000/0x0/0x4ffc00000, data 0xfc91da/0x10a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:04.325536+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 1630208 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:05.325712+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:06.325838+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87736320 unmapped: 1343488 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:07.325952+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.663159370s of 10.000439644s, submitted: 117
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86867968 unmapped: 2211840 heap: 89079808 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:08.326074+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080451 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86933504 unmapped: 3194880 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557763f08000
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:09.326175+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa4ea000/0x0/0x4ffc00000, data 0x10a58fb/0x1183000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 86908928 unmapped: 3219456 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:10.326312+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 14
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 2875392 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:11.326490+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 87252992 unmapped: 2875392 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa4e3000/0x0/0x4ffc00000, data 0x10afbb3/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:12.326727+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 1466368 heap: 90128384 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:13.326899+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1093257 data_alloc: 218103808 data_used: 307200
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:14.327054+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89341952 unmapped: 1835008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:15.327180+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa436000/0x0/0x4ffc00000, data 0x1157f2b/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89358336 unmapped: 1818624 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:16.327318+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89808896 unmapped: 1368064 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:17.327437+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7984 writes, 30K keys, 7984 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 7984 writes, 1865 syncs, 4.28 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2353 writes, 6787 keys, 2353 commit groups, 1.0 writes per commit group, ingest: 7.64 MB, 0.01 MB/s
                                           Interval WAL: 2353 writes, 1005 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
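[annotation] The writes-per-sync figures in the RocksDB dump above are plain ratios of the reported counters, e.g. 7984/1865 ≈ 4.28 cumulatively and 2353/1005 ≈ 2.34 for the interval:

    for label, writes, syncs in (("cumulative", 7984, 1865), ("interval", 2353, 1005)):
        print(f"{label}: {writes / syncs:.2f} writes/sync")  # 4.28 and 2.34, as reported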
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:18.327592+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.250799179s of 10.555690765s, submitted: 96
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157dbe/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088671 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:19.327774+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:20.327942+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:21.328110+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1157df1/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:22.328241+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:23.328411+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087325 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88678400 unmapped: 2498560 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:24.328568+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc ms_handle_reset ms_handle_reset con 0x557764265800
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: get_auth_request con 0x557765b96800 auth_method 0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_configure stats_period=5
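[annotation] The mgrc lines above show a reconnect-on-reset sequence: a transport reset terminates the mgr session, a fresh session is opened against the active mgr's address vector, authentication restarts, and the mgr answers with a configure message setting stats_period=5. A hedged sketch of that pattern with invented names (this is not Ceph's mgrc API):

    class MgrClient:
        def __init__(self, addrs: list[str]) -> None:
            self.addrs, self.session, self.stats_period = addrs, None, None

        def connect(self) -> None:
            print(f"Starting new session with {self.addrs}")
            self.session = object()           # stands in for a real connection

        def ms_handle_reset(self) -> None:
            print(f"Terminating session with {self.addrs[0]}")
            self.session = None
            self.connect()                    # immediately re-establish

        def handle_mgr_configure(self, stats_period: int) -> None:
            self.stats_period = stats_period  # report stats every 5s per the log

    c = MgrClient(["v2:192.168.122.100:6800", "v1:192.168.122.100:6801"])
    c.connect()
    c.ms_handle_reset()
    c.handle_mgr_configure(5)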
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88850432 unmapped: 2326528 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:25.328705+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:26.328839+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157d55/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157d55/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:27.328989+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88858624 unmapped: 2318336 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:28.329118+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.960803032s of 10.004592896s, submitted: 14
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087841 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88866816 unmapped: 2310144 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:29.329229+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157cb6/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:30.329464+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:31.329808+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88875008 unmapped: 2301952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:32.329974+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88883200 unmapped: 2293760 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:33.330175+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089913 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 2285568 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:34.330402+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa439000/0x0/0x4ffc00000, data 0x1157db0/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:35.330591+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:36.330748+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:37.330930+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:38.331106+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c19/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1087327 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:39.331241+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c19/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88907776 unmapped: 2269184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:40.331347+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.876619339s of 11.963118553s, submitted: 28
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157c4c/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:41.331471+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:42.331635+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88924160 unmapped: 2252800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:43.331758+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090125 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:44.331872+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:45.332009+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:46.332137+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x1157ce1/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:47.332278+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:48.332437+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43b000/0x0/0x4ffc00000, data 0x1157d0c/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089131 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:49.332609+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157c46/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:50.332770+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:51.332893+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x1157b7f/0x1230000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:52.333147+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.725197792s of 12.809599876s, submitted: 25
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:53.333358+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089729 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:54.333546+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:55.333762+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x1157c1a/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:56.333915+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:57.334049+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:58.334166+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1089729 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:59.334319+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 heartbeat osd_stat(store_statfs(0x4fa43d000/0x0/0x4ffc00000, data 0x1157c1a/0x1231000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88932352 unmapped: 2244608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:00.334450+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 2236416 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:01.334581+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88940544 unmapped: 2236416 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:02.334727+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88948736 unmapped: 2228224 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.085176468s of 10.108474731s, submitted: 6
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:03.334842+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092185 data_alloc: 218103808 data_used: 311296
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:04.334975+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x115969d/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:05.335174+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88965120 unmapped: 2211840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:06.335309+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 149 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x115969d/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:07.335434+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:08.335574+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090523 data_alloc: 218103808 data_used: 311296
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:09.335735+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88981504 unmapped: 2195456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:10.335904+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88997888 unmapped: 2179072 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:11.336062+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 88915968 unmapped: 2260992 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:12.336231+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 15
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa436000/0x0/0x4ffc00000, data 0x115cc1b/0x1236000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,1])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:13.336368+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.059730530s of 10.469105721s, submitted: 159
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100111 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:14.336501+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:15.336633+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:16.336864+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:17.337032+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89063424 unmapped: 2113536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 151 heartbeat osd_stat(store_statfs(0x4fa435000/0x0/0x4ffc00000, data 0x115ccb6/0x1237000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:18.337234+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100111 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:19.337354+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:20.337529+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fa433000/0x0/0x4ffc00000, data 0x115e719/0x123a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:21.337690+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:22.337894+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:23.338051+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 152 heartbeat osd_stat(store_statfs(0x4fa432000/0x0/0x4ffc00000, data 0x115e7b4/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104005 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:24.338179+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.492309570s of 11.524030685s, submitted: 14
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:25.338315+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:26.338449+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:27.338583+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:28.338715+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa431000/0x0/0x4ffc00000, data 0x115e8c4/0x123d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111955 data_alloc: 218103808 data_used: 327680
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:29.338836+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89079808 unmapped: 2097152 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:30.338994+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89088000 unmapped: 2088960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:31.339148+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 153 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x11605e0/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89096192 unmapped: 2080768 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:32.339309+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:33.339429+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1117181 data_alloc: 218103808 data_used: 335872
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:34.339623+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 154 heartbeat osd_stat(store_statfs(0x4fa429000/0x0/0x4ffc00000, data 0x1161ec8/0x1243000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:35.339902+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89120768 unmapped: 2056192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.267296791s of 10.416739464s, submitted: 51
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:36.340214+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89120768 unmapped: 2056192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:37.340405+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89128960 unmapped: 2048000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:38.340546+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 155 heartbeat osd_stat(store_statfs(0x4fa42c000/0x0/0x4ffc00000, data 0x1161e2d/0x1242000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1120383 data_alloc: 218103808 data_used: 344064
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:39.340685+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:40.340824+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89137152 unmapped: 2039808 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:41.341024+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 2031616 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 155 ms_handle_reset con 0x557763f08000 session 0x55776350d0e0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:42.341218+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 598016 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 156 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0x1165511/0x1249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 16
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:43.341391+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1124387 data_alloc: 218103808 data_used: 344064
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:44.341538+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:45.341756+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:46.341909+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.574859619s of 10.815853119s, submitted: 264
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:47.342077+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 157 heartbeat osd_stat(store_statfs(0x4fa422000/0x0/0x4ffc00000, data 0x1167127/0x124c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:48.342207+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 565248 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 157 handle_osd_map epochs [158,159], i have 157, src has [1,159]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137003 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:49.342333+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:50.342459+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:51.342641+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 557056 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:52.342840+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c441/0x1256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:53.342966+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:54.343079+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1139835 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:55.343215+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:56.343392+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.203613281s of 10.384685516s, submitted: 64
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c441/0x1256000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:57.343509+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:58.343633+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:59.343776+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141625 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 160 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116c4dc/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:00.344002+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91602944 unmapped: 622592 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 17
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:01.344228+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91611136 unmapped: 614400 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:02.344508+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91611136 unmapped: 614400 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: handle_auth_request added challenge on 0x557764264c00
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:03.344652+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91619328 unmapped: 606208 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:04.344801+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153121 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91652096 unmapped: 1622016 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:05.344942+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x116fc59/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:06.345119+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:07.345294+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.928189278s of 10.833756447s, submitted: 92
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:08.345456+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:09.345544+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146159 data_alloc: 218103808 data_used: 364544
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 162 heartbeat osd_stat(store_statfs(0x4fa416000/0x0/0x4ffc00000, data 0x116fa7c/0x1258000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:10.345654+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91660288 unmapped: 1613824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:11.345801+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:12.345967+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:13.346089+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:14.346230+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:15.346356+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:16.346539+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:17.346660+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:18.346788+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:19.347167+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:20.347320+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:21.347433+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:22.347599+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:23.347720+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:24.347882+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91668480 unmapped: 1605632 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:25.348033+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:26.348135+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:27.348299+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:28.348405+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:29.348529+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:30.348665+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:31.348844+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:32.349043+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:33.349170+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:34.349364+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:35.349503+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:36.349664+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91676672 unmapped: 1597440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:37.349864+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:38.350010+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:39.350187+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:40.350331+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:41.350463+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:42.350612+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:43.350748+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:44.350893+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:45.351017+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:46.351156+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:47.351307+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:48.351441+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:49.351523+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:50.351665+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:51.351757+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:52.352412+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91684864 unmapped: 1589248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:53.352546+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:54.352791+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:55.352949+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91693056 unmapped: 1581056 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:56.353505+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91701248 unmapped: 1572864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:57.353693+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:58.353874+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:59.353985+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:00.354125+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:01.354459+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:02.354603+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:03.354720+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:04.354847+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:05.354976+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:06.355090+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:07.355245+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa412000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:08.355338+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:09.355449+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150333 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:10.355587+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:11.355741+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91709440 unmapped: 1564672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 64.228721619s of 64.248054504s, submitted: 16
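NOTE: the _kv_sync_thread utilization lines make it easy to read off how idle the commit path is. From the figures above, the thread did real work for only ~19 ms out of ~64 s:

    # Values from the _kv_sync_thread line above.
    idle, total, submitted = 64.228721619, 64.248054504, 16
    busy = total - idle
    print(f"busy {busy * 1000:.1f} ms over {total:.1f} s "
          f"({busy / total:.3%}); {submitted / total:.2f} commits/s")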
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 ms_handle_reset con 0x557764264c00 session 0x5577635ba1e0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:12.355927+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91930624 unmapped: 1343488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 18
Nov 29 05:54:10 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
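NOTE: the new active mgr is advertised as a msgr v2/v1 address pair. A tiny sketch for pulling the pieces apart; calling the trailing number a "nonce" follows common Ceph usage but is an assumption here:

    import re

    addrs = "[v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]"
    for proto, host, port, nonce in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", addrs):
        print(f"{proto} endpoint {host}:{port} (nonce {nonce})")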
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:13.356073+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:14.356206+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149453 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:15.356347+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:16.356661+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x117151f/0x125b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:17.356819+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:18.356936+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:19.357066+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149453 data_alloc: 218103808 data_used: 372736
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:20.357198+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:21.357363+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:22.357533+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 heartbeat osd_stat(store_statfs(0x4fa411000/0x0/0x4ffc00000, data 0x11715ba/0x125c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:23.357769+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
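NOTE: the handle_osd_map lines spell out the catch-up arithmetic: the message carries epochs [163,164], the OSD has up to 163, so only epoch 164 needs applying, and that epoch indeed appears in the next heartbeat ("osd.2 164 ..."). As a sketch:

    first, last = 163, 164   # epoch range carried by the incoming map message
    have = 163               # newest epoch this OSD already has
    missing = list(range(have + 1, last + 1))
    print(f"epochs to apply: {missing}")   # -> [164]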
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.423639297s of 11.447608948s, submitted: 183
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:24.357906+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155219 data_alloc: 218103808 data_used: 380928
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:25.358063+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40e000/0x0/0x4ffc00000, data 0x11731a0/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:26.358313+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:27.358444+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:28.358576+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:29.358733+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153649 data_alloc: 218103808 data_used: 380928
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:30.358922+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:31.359093+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:32.359358+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:33.359588+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x1173105/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:34.359739+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153649 data_alloc: 218103808 data_used: 380928
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:35.359872+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.546059608s of 12.619788170s, submitted: 25
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:36.359996+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 164 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x11731a0/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:37.360215+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:38.360336+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:39.360450+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158853 data_alloc: 218103808 data_used: 389120
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:40.360553+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:41.360721+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:42.360898+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91955200 unmapped: 1318912 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 165 heartbeat osd_stat(store_statfs(0x4fa40d000/0x0/0x4ffc00000, data 0x1174b68/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:43.361123+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:44.361328+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159961 data_alloc: 218103808 data_used: 397312
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:45.361458+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:46.361606+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:47.361884+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 166 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:48.362023+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:49.362148+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159961 data_alloc: 218103808 data_used: 397312
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:50.362334+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91979776 unmapped: 1294336 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _renew_subs
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.282610893s of 14.913866997s, submitted: 51
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa40a000/0x0/0x4ffc00000, data 0x11766e3/0x1263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:51.362490+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:52.362687+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:53.362877+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:54.363085+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:55.363228+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:56.363394+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91987968 unmapped: 1286144 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:57.363562+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:58.363686+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:59.363850+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:00.363992+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:01.364164+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:02.364437+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:03.364583+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:04.364719+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:05.364844+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:06.365017+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:07.365140+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:08.365276+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:09.365389+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162935 data_alloc: 218103808 data_used: 397312
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:10.365508+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:11.365708+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:12.365908+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91996160 unmapped: 1277952 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.777770996s of 21.886068344s, submitted: 15
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:13.366056+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:14.366171+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:15.366337+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:16.366518+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:17.366722+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:18.366956+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:19.367122+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:20.367361+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:21.367612+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:22.367917+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:23.368089+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:24.368236+0000)
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:10 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:10 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163095 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:10 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:10 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:10 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:25.368355+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:26.368470+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:27.368589+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:28.368706+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:29.368843+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.618280411s of 16.621215820s, submitted: 1
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa407000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163983 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:30.369023+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:31.369174+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:32.369345+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:33.369627+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:34.369775+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1163983 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:35.369939+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:36.370106+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92004352 unmapped: 1269760 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:37.370225+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:38.370401+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:39.370591+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:40.370794+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:41.370970+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:42.371160+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:43.371320+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:44.371517+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:45.371690+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:46.372013+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:47.374542+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:48.374712+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:49.374860+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:50.375017+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:51.375183+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:52.375365+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92012544 unmapped: 1261568 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:53.375551+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92020736 unmapped: 1253376 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:54.375682+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162215 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92020736 unmapped: 1253376 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:55.375826+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 heartbeat osd_stat(store_statfs(0x4fa408000/0x0/0x4ffc00000, data 0x1178166/0x1266000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 167 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.889047623s of 25.900033951s, submitted: 3
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:56.376008+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:57.376143+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:58.376301+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:59.376456+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166389 data_alloc: 218103808 data_used: 409600
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92028928 unmapped: 1245184 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:00.376620+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:01.376781+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:02.376961+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:03.377080+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:04.377239+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169363 data_alloc: 218103808 data_used: 409600
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:05.377377+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:06.377531+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:07.377685+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:08.377907+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 1236992 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:09.378077+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169363 data_alloc: 218103808 data_used: 409600
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92045312 unmapped: 1228800 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:10.378323+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:11.378628+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:12.378811+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:13.378983+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:14.379110+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:15.379359+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:16.379501+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 1359872 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:17.379627+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:18.379743+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:19.379855+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:20.380152+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:21.380342+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:22.380567+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:23.380757+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:24.380916+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:25.381065+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:26.381421+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:27.381601+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:28.381737+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:29.381862+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:30.381988+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:31.382119+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:32.382310+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:33.382439+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:34.382569+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:35.382731+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:36.382855+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:37.382982+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:38.383136+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:39.383292+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:40.383456+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:41.383651+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:42.383840+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:43.384008+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:44.384139+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:45.384319+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:46.384459+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:47.384594+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91922432 unmapped: 1351680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:48.384772+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:49.384911+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:50.385104+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:51.385259+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:52.385465+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:53.385625+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:54.385761+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:55.385933+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91938816 unmapped: 1335296 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:56.386066+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92053504 unmapped: 1220608 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:57.386224+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'config show' '{prefix=config show}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
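[editor's note] The do_command entries above record requests arriving on the OSD's admin socket (the same path "ceph daemon osd.2 <command>" uses). A hedged Python sketch of the exchange as we understand it: a NUL-terminated JSON command in, a 4-byte big-endian length plus payload back; the .asok path below is hypothetical:

    import json
    import socket
    import struct

    def admin_socket(asok_path: str, prefix: str) -> bytes:
        """Hedged sketch of the admin-socket exchange: NUL-terminated JSON
        command in, 4-byte big-endian length plus payload back."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(asok_path)
            s.sendall(json.dumps({"prefix": prefix}).encode() + b"\0")
            hdr = b""
            while len(hdr) < 4:
                hdr += s.recv(4 - len(hdr))
            (length,) = struct.unpack(">I", hdr)
            body = b""
            while len(body) < length:
                body += s.recv(length - len(body))
            return body

    # Hypothetical path, mirroring the commands seen above:
    # admin_socket("/var/run/ceph/ceph-osd.2.asok", "config show")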
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92037120 unmapped: 2285568 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:58.392096+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 91971584 unmapped: 2351104 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:59.392358+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'log dump' '{prefix=log dump}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 103022592 unmapped: 2342912 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:00.392520+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'perf dump' '{prefix=perf dump}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'perf schema' '{prefix=perf schema}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:01.392693+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:02.392869+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:03.393000+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:04.393126+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:05.393254+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:06.393409+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:07.393526+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa401000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:08.393652+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92200960 unmapped: 13164544 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:09.393795+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169523 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 13156352 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:10.393957+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92209152 unmapped: 13156352 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:11.394076+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 75.936843872s of 76.001602173s, submitted: 35
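[editor's note] The _kv_sync_thread utilization line says the BlueStore kv sync thread was idle for 75.94 s of a 76.00 s window while 35 transactions were submitted, i.e. essentially idle:

    # Busy fraction implied by the _kv_sync_thread utilization line.
    idle, total, submitted = 75.936843872, 76.001602173, 35
    busy = total - idle
    print(f"busy {busy:.3f} s ({busy / total:.3%}), "
          f"{busy / submitted * 1e3:.2f} ms per submitted txn")
    # -> busy 0.065 s (0.085%), 1.85 ms per submitted txn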
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 ms_handle_reset con 0x557765b96000 session 0x5577650065a0
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:12.394217+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Got map version 19
Nov 29 05:54:11 compute-0 ceph-osd[91343]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
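[editor's note] The active mgr is advertised as an entity address vector carrying both a msgr2 endpoint and a legacy v1 endpoint, each with a connection nonce. A small hedged parser for that format:

    import re

    # Split a Ceph entity address vector, as printed in the mgr map above,
    # into (protocol, ip, port, nonce) tuples.
    addrvec = "[v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]"
    for proto, ip, port, nonce in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", addrvec):
        print(proto, ip, port, nonce)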
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:13.394339+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:14.394551+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:15.394684+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:16.394815+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:17.394937+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:18.395100+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:19.395232+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:20.395329+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:21.395473+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:22.395633+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:23.395758+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:24.395877+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:25.395999+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:26.396125+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:27.396222+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:28.396382+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:29.396519+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:30.396633+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:31.396747+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:32.397139+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:33.397346+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:34.397496+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:35.397672+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:36.397852+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:37.398028+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:38.398222+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:39.398400+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:40.398555+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:41.398675+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:42.399120+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:43.399244+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:44.399444+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:45.399561+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:46.399680+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:47.399825+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:48.399953+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:49.400095+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:50.400215+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:51.400337+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:52.400503+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:53.400631+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:54.400769+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:55.401031+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:56.401229+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:57.401395+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:58.401654+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:59.401801+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:00.401948+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:01.402074+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:02.402247+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:03.402507+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:04.402886+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:05.403042+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:06.403317+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:07.403522+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:08.403722+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:09.403954+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:10.404188+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:11.404347+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:12.404532+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:13.404657+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:14.404838+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:15.405037+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:16.405350+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:17.405586+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:18.405748+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:19.405921+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:20.406066+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:21.406325+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:22.406595+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:23.406802+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:24.406929+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:25.407070+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92545024 unmapped: 12820480 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:26.407239+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:27.407432+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:28.407612+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:29.407745+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:30.407893+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:31.408066+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:32.408232+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:33.408386+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:34.408514+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:35.408728+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:36.408857+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:37.409034+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:38.409149+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:39.409370+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:40.409497+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:41.409882+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:42.410066+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:43.410249+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:44.410403+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:45.410597+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:46.410789+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:47.410925+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:48.411130+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:49.411307+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:50.411471+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:51.411605+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:52.411779+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:53.411995+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:54.412115+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:55.412358+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:56.412516+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92553216 unmapped: 12812288 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:57.412767+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:58.412937+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:59.413206+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:00.413414+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:01.413573+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:02.413735+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:03.413929+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:04.414077+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:05.414326+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:06.414515+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:07.414670+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:08.414885+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:09.415038+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:10.415248+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:11.415436+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:12.415677+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:13.415837+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:14.415980+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:15.416162+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:16.416334+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:17.416485+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:18.416635+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:19.416808+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:20.416943+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 12804096 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:21.417114+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:22.417299+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:23.857465+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:24.857652+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:25.857803+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:26.858067+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:27.858225+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:28.858417+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:29.858572+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:30.858738+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:31.858918+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:32.859093+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:33.859346+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:34.859708+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:35.859885+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:36.860050+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:37.860235+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:38.860478+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:39.860688+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:40.860885+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:41.861211+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:42.861435+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:43.861627+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:44.861786+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:45.861905+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:46.862040+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:47.862197+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92569600 unmapped: 12795904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:48.862418+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:49.862648+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:50.862985+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:51.863326+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:52.863626+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:53.863767+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:54.863915+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:55.864115+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:56.864335+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:57.864504+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:58.864676+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:59.865040+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:00.865218+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:01.865504+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:02.865709+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:03.865988+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:04.866193+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:05.866426+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:06.866820+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:07.866943+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:08.867106+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:09.867359+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:10.867502+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 12787712 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:11.867687+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:12.868040+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:13.868367+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:14.868616+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:15.868853+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:16.869001+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:17.869165+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:18.869328+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:19.869643+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:20.869975+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:21.870182+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:22.870515+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:23.870823+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:24.871065+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92585984 unmapped: 12779520 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:25.871233+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:26.871401+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:27.871607+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:28.871793+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:29.871932+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:30.872118+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:31.872321+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:32.872543+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:33.872715+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:34.872860+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:35.873003+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:36.873161+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:37.873311+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:38.873462+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:39.873610+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:40.873741+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:41.873827+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:42.873968+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:43.874098+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:44.874236+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:45.874375+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:46.874507+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:47.874676+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:48.874808+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:49.874945+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:50.875106+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:51.875257+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:52.875439+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:53.875618+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:54.875776+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:55.875905+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:56.876036+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92594176 unmapped: 12771328 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:57.876199+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:58.876317+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:59.876458+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:00.876623+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:01.876796+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:02.876977+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:03.877136+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:04.877296+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:05.877508+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:06.877738+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:07.877917+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:08.878075+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:09.878234+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:10.878386+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:11.878538+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:12.878721+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:13.878958+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:14.879095+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:15.879323+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:16.879455+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9735 writes, 34K keys, 9735 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9735 writes, 2412 syncs, 4.04 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1751 writes, 3893 keys, 1751 commit groups, 1.0 writes per commit group, ingest: 1.60 MB, 0.00 MB/s
                                           Interval WAL: 1751 writes, 547 syncs, 3.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:17.879611+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:18.879737+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:19.879919+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92602368 unmapped: 12763136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:20.880151+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92610560 unmapped: 12754944 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:21.880315+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:22.880461+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:23.880585+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:24.880761+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:25.881013+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:26.881148+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:27.881305+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:28.881450+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:29.881617+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:30.881843+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:31.882009+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:32.882165+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:33.882326+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:34.882549+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:35.882698+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:36.882843+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:37.882983+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:38.883128+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:39.883251+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:40.883436+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:41.883598+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:42.883784+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:43.883972+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:44.884130+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:45.884365+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:46.884546+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:47.884694+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:48.884877+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92618752 unmapped: 12746752 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:49.885029+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:50.885177+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:51.885342+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:52.885518+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:53.885699+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:54.885823+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:55.885972+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:56.886106+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:57.886221+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:58.886380+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:59.886556+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:00.886721+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:01.886856+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:02.887028+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:03.887211+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:04.887402+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:05.887550+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:06.887748+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:07.887891+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:08.888082+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:09.888237+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 12738560 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.270599365s of 299.284576416s, submitted: 157
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:10.888497+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92643328 unmapped: 12722176 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:11.888667+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 12713984 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:12.888821+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:13.888956+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:14.889064+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:15.889235+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:16.889403+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:17.889554+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:18.889696+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:19.889821+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:20.889967+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:21.890092+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:22.890256+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:23.890396+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:24.890518+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:25.890650+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:26.890784+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:27.890908+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:28.891031+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:29.891161+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:30.891305+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:31.891476+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:32.891656+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:33.891821+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:34.891953+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:35.892098+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:36.892182+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:37.892307+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:38.892451+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:39.892609+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:40.892786+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:41.892914+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:42.893045+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:43.893180+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:44.893324+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:45.893551+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:46.893729+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:47.893916+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:48.894115+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:49.894296+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:50.894476+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:51.894648+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:52.894853+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:53.895027+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:54.895186+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:55.895493+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:56.895681+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:57.895825+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:58.896007+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:59.896171+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:00.896342+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:01.896482+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:02.896658+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:03.896782+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:04.896934+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:05.897077+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:06.897220+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:07.897344+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:08.897484+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:09.897610+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93945856 unmapped: 11419648 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:10.897764+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:11.897908+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:12.898055+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:13.898237+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:14.898367+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:15.898556+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:16.898791+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:17.901087+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:18.902932+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:19.904405+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:20.905679+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:21.906776+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:22.907778+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:23.908609+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:24.908766+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:25.909173+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:26.909487+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:27.909652+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:28.910164+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:29.910321+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:30.910668+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:31.910990+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:32.911309+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:33.911520+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:34.911656+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:35.911818+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:36.912068+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:37.912349+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:38.912591+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:39.912742+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:40.912913+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:41.913105+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:42.913327+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:43.913499+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:44.913690+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:45.913878+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:46.914035+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:47.914234+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:48.914380+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:49.914552+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:50.915942+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:51.916750+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:52.916999+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:53.919113+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:54.919469+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:55.925992+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:56.928359+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:57.928814+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:58.930319+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:59.930861+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:00.931110+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:01.931760+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:02.932377+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:03.932840+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:04.933160+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:05.933432+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:06.933628+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:07.934043+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:08.934229+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:09.934469+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:10.934758+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:11.934918+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:12.935151+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:13.935322+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:14.935423+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:15.935669+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:16.935815+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:17.935950+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 05:54:11 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3453185045' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:18.936102+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:19.936226+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:20.936411+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:21.936595+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:22.936900+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:23.937099+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:24.937368+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:25.937654+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:26.937802+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:27.938003+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:28.938166+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:29.938384+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:30.938547+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:31.938699+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:32.938948+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:33.939119+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:34.939308+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93954048 unmapped: 11411456 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:35.939672+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:36.940004+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:37.940168+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:38.940335+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:39.940489+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:40.940629+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:41.940975+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:42.941419+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:43.941605+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:44.941800+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:45.942046+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:46.942325+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:47.943405+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:48.943603+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:49.943726+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:50.943860+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:51.944005+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:52.944211+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:53.944397+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:54.944527+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:55.944648+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:56.944769+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:57.944885+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:58.945047+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:59.945228+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:00.945426+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:01.945569+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:02.945788+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:03.945931+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:04.946062+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93962240 unmapped: 11403264 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:05.946224+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:06.946376+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:07.946509+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:08.946656+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:09.946852+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:10.947049+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:11.947247+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:12.947539+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:13.947670+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:14.947798+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:15.948019+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:16.948209+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:17.948364+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:18.948505+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:19.948624+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:20.948823+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:21.949070+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:22.949325+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:23.949475+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:24.951043+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:25.951324+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:26.951529+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:27.951672+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:28.951818+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:29.951962+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:30.952100+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:31.952255+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:32.952447+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:33.952567+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:34.952688+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:35.952919+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: osd.2 169 heartbeat osd_stat(store_statfs(0x4fa402000/0x0/0x4ffc00000, data 0x117b7af/0x126c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:36.953064+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:11 compute-0 ceph-osd[91343]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:11 compute-0 ceph-osd[91343]: bluestore.MempoolThread(0x557761ca5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168643 data_alloc: 218103808 data_used: 413696
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93970432 unmapped: 11395072 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:37.953182+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93626368 unmapped: 11739136 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'config show' '{prefix=config show}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:38.953321+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 94003200 unmapped: 11362304 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: tick
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_tickets
Nov 29 05:54:11 compute-0 ceph-osd[91343]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:39.953456+0000)
Nov 29 05:54:11 compute-0 ceph-osd[91343]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 11771904 heap: 105365504 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:11 compute-0 ceph-osd[91343]: do_command 'log dump' '{prefix=log dump}'
Nov 29 05:54:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14891 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:54:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:54:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:54:11 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:54:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 05:54:11 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/648564266' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:54:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14895 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:11 compute-0 ceph-mon[75176]: from='client.14883 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:11 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/4012891697' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 05:54:11 compute-0 ceph-mon[75176]: from='client.14887 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:11 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3453185045' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 05:54:11 compute-0 ceph-mon[75176]: from='client.14891 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:11 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/648564266' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 05:54:11 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14899 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:11 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 05:54:11 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3003276651' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 05:54:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 05:54:12 compute-0 ceph-mgr[75473]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 05:54:12 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14901 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:12 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 05:54:12 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/413715245' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 05:54:12 compute-0 ceph-mon[75176]: from='client.14895 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:12 compute-0 ceph-mon[75176]: from='client.14899 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:12 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3003276651' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 05:54:12 compute-0 ceph-mon[75176]: from='client.14901 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:12 compute-0 ceph-mon[75176]: pgmap v1525: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:12 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/413715245' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 05:54:12 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:54:12 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14909 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:12 compute-0 ceph-93f82912-647c-5e78-b081-707d0a2966d8-mgr-compute-0-csskcz[75469]: 2025-11-29T05:54:12.971+0000 7fa4f8ec8640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 05:54:12 compute-0 ceph-mgr[75473]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 05:54:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 05:54:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604468007' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 05:54:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 05:54:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/423410994' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 05:54:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 05:54:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3942216470' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 05:54:13 compute-0 ceph-mon[75176]: from='client.14909 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:13 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/604468007' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 05:54:13 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/423410994' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 05:54:13 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3942216470' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 05:54:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:54:13.769 163973 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 05:54:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:54:13.770 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 05:54:13 compute-0 ovn_metadata_agent[163968]: 2025-11-29 05:54:13.770 163973 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 05:54:13 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 05:54:13 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326686157' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 05:54:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/468367657' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 05:54:14 compute-0 crontab[292989]: (root) LIST (root)
Nov 29 05:54:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 05:54:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1857430637' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 05:54:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2165627524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 05:54:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2165627524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 05:54:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335514251' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 05:54:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719354350' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1326686157' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/468367657' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1857430637' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/2165627524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.10:0/2165627524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2335514251' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: pgmap v1526: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:14 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3719354350' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 05:54:14 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 05:54:14 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147154485' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 05:54:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 05:54:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4066747595' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 05:54:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 05:54:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3461734485' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:39.806121+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 475136 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:40.806360+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 466944 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:41.806472+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 466944 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:42.806621+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:43.806814+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:44.806965+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 458752 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:45.807144+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 450560 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:46.807320+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 450560 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:47.807454+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 442368 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:48.807628+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 442368 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:49.807807+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:50.808007+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:51.808237+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78151680 unmapped: 434176 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:52.808533+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 425984 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:53.808716+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78159872 unmapped: 425984 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:54.808926+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:55.809141+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:56.809376+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78168064 unmapped: 417792 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:57.809568+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 409600 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:58.809751+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 409600 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:59.809891+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 401408 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:00.810125+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78184448 unmapped: 401408 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:01.810299+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 393216 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:02.810499+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 393216 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:03.810658+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 393216 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:04.810866+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 385024 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:05.811066+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78200832 unmapped: 385024 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:06.811238+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:07.811331+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:08.811445+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78209024 unmapped: 376832 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:09.811597+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 368640 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:10.811724+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 368640 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:11.811852+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:12.811984+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:13.812115+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:14.812234+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:15.812323+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:16.812482+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:17.812680+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:18.812834+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:19.812994+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:20.813125+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:21.813256+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78225408 unmapped: 360448 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:22.813408+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:23.813541+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:24.813670+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:25.813855+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:26.814032+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:27.814179+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:28.814333+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:29.814466+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:30.814627+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:31.814748+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:32.814876+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:33.815032+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:34.815155+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:35.815317+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:36.815460+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:37.815590+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:38.815710+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:39.815842+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:40.816050+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:41.816234+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:42.816361+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:43.816477+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:44.816625+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:45.816743+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:46.816925+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:47.817075+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78233600 unmapped: 352256 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:48.817253+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:49.817414+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:50.817607+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:51.817798+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:52.817955+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:53.818095+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:54.818219+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:55.818446+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:56.818675+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:57.818860+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:58.819015+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:59.819186+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:00.819387+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:01.819540+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:02.819657+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:03.819831+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:04.819954+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:05.820355+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:06.820543+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:07.820684+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:08.820815+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:09.820944+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:10.821067+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:11.821209+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:12.821382+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 344064 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:13.821526+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:14.821644+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:15.821770+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:16.821923+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:17.822054+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 335872 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:18.822212+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:19.822410+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:20.822534+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:21.822671+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:22.823784+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:23.823938+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:24.824221+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:25.824390+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:26.824961+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:27.825410+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:28.825729+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:29.825918+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:30.826119+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:31.826315+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:32.826435+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:33.826564+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:34.826741+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:35.826930+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:36.827113+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:37.827253+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:38.827406+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:39.827568+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:40.827707+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:41.827836+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:42.827946+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:43.828059+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:44.828193+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:45.828397+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:46.828564+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:47.828879+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78258176 unmapped: 327680 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:48.829074+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:49.829244+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:50.829339+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:51.829603+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:52.829767+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:53.830006+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:54.830186+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:55.830362+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:56.830614+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:57.830763+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:58.830939+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:59.831183+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:00.831368+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:01.831547+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:02.831701+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:03.831873+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:04.832007+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:05.832149+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:06.832313+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78266368 unmapped: 319488 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:07.832447+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:08.832671+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:09.832823+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:10.832983+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:11.833195+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:12.833347+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:13.833476+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:14.833605+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:15.833724+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:16.833947+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:17.834080+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:18.834318+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:19.834534+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:20.834712+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:21.834902+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:22.835028+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:23.835171+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:24.835337+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 311296 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:25.835505+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 303104 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:26.835670+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:27.835820+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:28.835969+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:29.836100+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:30.836243+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:31.836382+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:32.836656+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:33.836870+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78290944 unmapped: 294912 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:34.837047+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:35.837199+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:36.837418+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:37.837576+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:38.837736+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:39.837854+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:40.838067+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:41.838233+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:42.838415+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:43.838601+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:44.838794+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78299136 unmapped: 286720 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:45.838965+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:46.839116+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:47.839251+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:48.839419+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:49.839577+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:50.839704+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:51.839831+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:52.839963+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:53.840117+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:54.840336+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:55.840534+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:56.840711+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:57.840871+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:58.841023+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:59.841181+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:00.841356+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:01.841527+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:02.841680+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:03.841840+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:04.841998+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 278528 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:05.842215+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:06.842494+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:07.842696+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:08.842821+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:09.842948+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:10.843098+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 270336 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:11.843287+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:12.843459+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:13.843707+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:14.843875+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:15.844045+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:16.844203+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:17.844363+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:18.844518+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78323712 unmapped: 262144 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc ms_handle_reset ms_handle_reset con 0x55909679fc00
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: get_auth_request con 0x559097d03c00 auth_method 0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_configure stats_period=5
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 ms_handle_reset con 0x559097d03400 session 0x5590967283c0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x55909a3ba400
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 ms_handle_reset con 0x5590971ab800 session 0x559097306780
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x55909a3b9000
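This burst is the one real state change in the section: the connection to ceph-mgr at v2:192.168.122.100:6800 was reset, the OSD immediately started a new session (which the mgr reconfigured with stats_period=5), and two client connections observed ms_handle_reset in passing. A sketch for pulling such events out of the periodic noise — the marker strings are copied verbatim from the lines above, and the script simply reads the log on stdin:

import sys

# Substrings that only occur in reset/reconnect bursts, not in the
# periodic tick/tune_memory/heartbeat traffic.
markers = ("ms_handle_reset", "mgrc reconnect", "handle_mgr_configure",
           "get_auth_request", "handle_auth_request")

for line in sys.stdin:
    if any(m in line for m in markers):
        sys.stdout.write(line)

Usage would be something like journalctl -u <osd unit> | python3 find_events.py, where the unit name and script name are placeholders for whatever the deployment actually uses.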
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:19.844730+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 0 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:20.844973+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 0 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:21.845171+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 0 heap: 78585856 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:22.845368+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
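Right after the reconnect the allocator heap grows for the first time in this window: against the earlier mapped: 78585856 unmapped: 0 heap: 78585856 reading, the heap expanded by exactly 1 MiB, of which 8 KiB is mapped and the remainder sits unmapped — consistent with the allocator grabbing a single new span for the fresh mgr session. The arithmetic, with values copied from the adjacent lines:

old_mapped, old_heap = 78585856, 78585856
new_mapped, new_unmapped, new_heap = 78594048, 1040384, 79634432
assert new_heap - old_heap == 1048576       # exactly 1 MiB of new heap
assert new_mapped - old_mapped == 8192      # 8 KiB of it is mapped
assert new_unmapped == 1048576 - 8192       # the rest is unmapped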
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:23.845648+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:24.845837+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:25.846143+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:26.846400+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:27.846571+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:28.846759+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:29.846979+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:30.847252+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:31.847552+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:32.847836+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:33.848070+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:34.848328+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1040384 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:35.848481+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:36.848653+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:37.848781+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:38.848944+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:39.849132+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:40.849280+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:41.849481+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:42.849653+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:43.850016+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:44.850375+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:45.850563+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:46.850733+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:47.850884+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:48.851032+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:49.851215+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78602240 unmapped: 1032192 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:50.851377+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:51.851548+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:52.851714+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:53.851900+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:54.852052+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:55.852172+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:56.852332+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:57.852461+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:58.852597+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:59.852764+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:00.852884+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78610432 unmapped: 1024000 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:01.852999+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:02.853109+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:03.853344+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:04.853491+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:05.853664+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:06.853905+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:07.854024+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:08.854176+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:09.854372+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:10.854508+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78618624 unmapped: 1015808 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:11.854620+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:12.854802+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:13.854929+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:14.855113+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:15.855249+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:16.855451+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:17.855600+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:18.855762+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:19.855913+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78626816 unmapped: 1007616 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:20.856093+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 999424 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:21.856250+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 999424 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:22.856468+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78635008 unmapped: 999424 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:23.856623+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:24.856775+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:25.856942+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:26.857134+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:27.857320+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:28.857448+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:29.857631+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:30.857782+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:31.858054+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:32.858256+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:33.858461+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:34.858577+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:35.858757+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:36.858943+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:37.859105+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:38.859296+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:39.859488+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:40.859669+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:41.859879+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:42.860139+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:43.860384+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:44.860583+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:45.860746+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:46.860910+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:47.861058+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:48.861244+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:49.861412+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78643200 unmapped: 991232 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:50.861528+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:51.861719+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:52.861853+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:53.862014+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:54.862174+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:55.862324+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:56.862475+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78651392 unmapped: 983040 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:57.862633+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:58.862784+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:59.862983+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:00.863132+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:01.863259+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:02.863379+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:03.863554+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:04.863696+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:05.863821+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:06.864015+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:07.864171+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:08.864374+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:09.864530+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:10.864670+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:11.864799+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:12.864930+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:13.865122+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:14.865289+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:15.865470+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:16.865647+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:17.865821+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:18.865964+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:19.866182+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:20.866336+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:21.866478+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:22.866600+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:23.866769+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:24.866918+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:25.867066+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:26.867333+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:27.867450+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:28.867594+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:29.867713+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:30.867960+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:31.868143+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:32.868340+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:33.868468+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:34.868607+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:35.868795+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:36.869021+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:37.869170+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:38.869340+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:39.869504+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:40.869679+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78659584 unmapped: 974848 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:41.869899+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:42.870078+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:43.870243+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:44.870416+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:45.870598+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:46.870798+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:47.871011+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:48.871164+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:49.871350+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:50.871507+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:51.871659+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:52.871827+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:53.872013+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:54.872160+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:55.872306+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:56.872484+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:57.872672+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:58.872871+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:59.873056+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:00.873318+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:01.873516+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:02.873664+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78667776 unmapped: 966656 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:03.873822+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:04.874096+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:05.874294+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:06.874475+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:07.874648+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:08.874792+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:09.874928+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:10.875117+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:11.875234+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:12.875351+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:13.875461+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:14.875582+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:15.875716+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:16.875896+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:17.876054+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:18.876200+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:19.876349+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:20.876479+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:21.876706+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:22.876919+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:23.877117+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:24.906695+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:25.906866+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78675968 unmapped: 958464 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:26.907097+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:27.907275+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:28.907435+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:29.907606+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:30.907804+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:31.907986+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:32.908094+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:33.908238+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:34.908390+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:35.908544+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:36.908719+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:37.908891+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:38.909093+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:39.909302+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:40.909454+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:41.909575+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:42.909719+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:43.909905+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:44.910070+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:45.910291+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78684160 unmapped: 950272 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:46.910480+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:47.910645+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:48.910830+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:49.910941+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:50.911090+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:51.911230+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:52.911410+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:53.911582+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:54.911727+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:55.911854+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:56.912000+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:57.912207+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:58.912396+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:59.912542+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:00.912765+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 942080 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:01.912993+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:02.913206+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:03.913436+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:04.913621+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:05.913775+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:06.913934+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:07.914123+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:08.914246+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78700544 unmapped: 933888 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:09.914438+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 925696 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:10.914576+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78708736 unmapped: 925696 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:11.914724+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:12.914965+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:13.915465+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:14.915657+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:15.915839+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:16.916010+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:17.916156+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:18.916314+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:19.916485+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:20.916604+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:21.916787+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:22.917062+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:23.917237+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:24.917396+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:25.917553+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:26.918131+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:27.918347+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:28.918512+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:29.918695+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:30.918834+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:31.918959+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:32.919137+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:33.919336+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:34.919548+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:35.919740+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:36.919993+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:37.920191+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:38.920363+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:39.920583+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:40.920765+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:41.920911+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:42.921036+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:43.921218+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:44.921330+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:45.921475+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:46.921647+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:47.921802+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:48.921962+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:49.922159+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:50.922410+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:51.922556+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78716928 unmapped: 917504 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:52.922721+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:53.922904+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:54.923054+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:55.923181+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:56.923332+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:57.923444+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:58.923582+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:59.923701+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:00.923811+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:01.923955+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:02.924094+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:03.924462+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:04.924627+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:05.924790+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:06.924974+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:07.925163+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:08.925374+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:09.925605+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:10.925789+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:11.925920+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78725120 unmapped: 909312 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 7055 writes, 29K keys, 7055 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7055 writes, 1300 syncs, 5.43 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 278 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.045       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.023       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55909594d1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:12.926061+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:13.926236+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:14.926414+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:15.926538+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:16.926726+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:17.926878+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:18.927031+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:19.927188+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:20.927338+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:21.927467+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:22.927638+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:23.927835+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:24.927974+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78757888 unmapped: 876544 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:25.928140+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:26.928351+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:27.928489+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:28.928644+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:29.928806+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:30.928948+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:31.929123+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:32.929346+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:33.929472+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:34.929602+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:35.929749+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:36.929928+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:37.930093+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:38.930240+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:39.930417+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78774272 unmapped: 860160 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:40.930579+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:41.930741+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:42.930879+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:43.931080+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:44.931210+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:45.931354+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
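[editor's note] The two commit_cache_size lines recur as a pair on every mempool resize, suggesting two RocksDB caches being retuned together. The printed decimals are, to within rounding, exact simple fractions, which hints they come from small integer cache splits rather than measurements; that derivation is a guess, but the fractions themselves are easy to verify:

    # Side check: the logged ratios round to exact simple fractions.
    from fractions import Fraction

    for r in (0.285714, 0.0555556):
        print(r, "~=", Fraction(r).limit_denominator(100))   # 2/7 and 1/18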
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:46.931576+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:47.931824+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:48.931993+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:49.932129+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:50.932323+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:51.932475+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:52.932656+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 851968 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:53.932772+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:54.932932+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:55.933133+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:56.933427+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:57.933631+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:58.933848+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:59.934049+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:00.934221+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:01.934417+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:02.934598+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:03.935626+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:04.935817+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:05.935984+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:06.936167+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:07.936342+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:08.936493+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:09.936801+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78766080 unmapped: 868352 heap: 79634432 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.909240723s of 600.174255371s, submitted: 90
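[editor's note] This _kv_sync_thread line is the one non-repeating datapoint in the run: over a ~600 s window the kv sync thread was idle for 599.909 s and committed 90 transactions. Pure arithmetic on the values printed above:

    # Quick check of the _kv_sync_thread utilization figures.
    idle, window, submitted = 599.909240723, 600.174255371, 90
    busy = window - idle                       # ~0.265 s of work in ~600 s
    print(f"busy {busy:.3f}s ({busy / window:.3%}), "
          f"{busy / submitted * 1e3:.2f} ms per submitted txc")
    # ~0.044% busy, ~2.94 ms per commit: the kv sync thread is essentially idle,
    # matching the near-zero *_used values in the surrounding mempool lines.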
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:10.936946+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 1900544 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858718 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:11.937077+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:12.937314+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:13.937557+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:14.937703+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:15.937864+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:16.938098+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:17.938245+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:18.938415+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:19.938583+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:20.938726+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:21.938873+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:22.939038+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:23.939192+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:24.939340+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:25.939463+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:26.939627+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:27.939809+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:28.939998+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:29.940133+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:30.940338+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:31.940540+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:32.940746+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:33.940969+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:34.941171+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:35.941331+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:36.941530+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:37.941716+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:38.941884+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:39.942040+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:40.942211+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:41.942405+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:42.942580+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:43.942752+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:44.942894+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:45.943053+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:46.943318+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:47.943486+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:48.943679+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:49.943851+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:50.943985+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:51.944129+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:52.944353+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:53.944504+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:54.944642+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:55.944764+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:56.944960+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:57.945148+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:58.945307+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:59.945426+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:00.945553+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:01.945701+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:02.945856+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:03.945982+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:04.946168+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:05.946342+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:06.946562+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:07.946740+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:08.946914+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:09.947172+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78798848 unmapped: 1884160 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:10.947323+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:11.947566+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:12.947726+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:13.947866+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:14.948069+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:15.948215+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:16.948469+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:17.948672+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78815232 unmapped: 1867776 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:18.948878+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:19.949048+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:20.949191+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:21.949327+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:22.949494+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:23.949701+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:24.949835+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:25.949998+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:26.950252+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:27.950561+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:28.952225+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:29.952796+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:30.953358+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:31.953858+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:32.954842+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:33.955747+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:34.956097+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:35.956476+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:36.956900+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:37.957320+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:38.957681+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78823424 unmapped: 1859584 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:39.958005+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:40.958233+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:41.958346+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:42.958556+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:43.958833+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:44.959012+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:45.959178+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:46.959334+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:47.959466+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:48.959699+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:49.959862+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:50.959991+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:51.960140+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:52.960352+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:53.960548+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:54.960758+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:55.960948+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:56.961144+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:57.961365+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:58.961498+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:59.961650+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:00.961833+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:01.962041+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:02.962316+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:03.962496+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78831616 unmapped: 1851392 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:04.962627+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:05.962756+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:06.963023+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:07.963233+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:08.963382+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:09.963537+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:10.963674+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 1843200 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:11.963828+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:12.964043+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:13.964229+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:14.964383+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:15.964574+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:16.964728+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:17.964892+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:18.965098+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:19.965340+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:20.965519+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:21.965717+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:22.965942+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:23.966154+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:24.966350+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:25.966535+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:26.966750+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:27.966964+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:28.967150+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:29.967353+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:30.967623+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:31.967932+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:32.968195+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:33.968412+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:34.968617+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:35.968768+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:36.969006+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:37.969236+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:38.969518+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:39.969688+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:40.969846+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:41.970054+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:42.970245+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:43.970441+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:44.970601+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:45.970750+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:46.970965+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:47.971090+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:48.971247+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:49.971454+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:50.971660+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:51.971853+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:52.972038+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:53.972195+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:54.972334+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:55.972444+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:56.972614+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:57.972726+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:58.972900+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:59.973037+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:00.973198+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:01.973336+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:02.973457+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:03.973631+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:04.973818+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:05.974054+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:06.974347+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:07.974610+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:08.974922+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:09.975133+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:10.975337+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:11.975582+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 1835008 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:12.975814+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:13.976061+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:14.976190+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:15.976427+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:16.976685+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:17.976841+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:18.976974+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:19.977196+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:20.977386+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:21.977613+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:22.977869+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:23.978070+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:24.978583+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:25.978747+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:26.978918+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858646 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:27.979068+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:28.979241+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78856192 unmapped: 1826816 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:29.979332+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x55909a3a6000
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 heartbeat osd_stat(store_statfs(0x4fca46000/0x0/0x4ffc00000, data 0x127c87/0x1d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 1802240 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:30.979507+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 200.354232788s of 200.571792603s, submitted: 90
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78880768 unmapped: 1802240 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:31.979665+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 1777664 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 867538 data_alloc: 218103808 data_used: 233472
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:32.979868+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 123 ms_handle_reset con 0x55909a3a6000 session 0x559099864960
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 1753088 heap: 80683008 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:33.980008+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x559097d2ac00
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 123 heartbeat osd_stat(store_statfs(0x4fca39000/0x0/0x4ffc00000, data 0x12cfa4/0x1e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79003648 unmapped: 18464768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:34.980168+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 124 ms_handle_reset con 0x559097d2ac00 session 0x559099b63a40
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 18456576 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba3a000/0x0/0x4ffc00000, data 0x112cfb3/0x11e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:35.980327+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 18456576 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:36.980506+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79011840 unmapped: 18456576 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986920 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:37.980660+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 18440192 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:38.980795+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x112eb4c/0x11e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79028224 unmapped: 18440192 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 124 heartbeat osd_stat(store_statfs(0x4fba36000/0x0/0x4ffc00000, data 0x112eb4c/0x11e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:39.980943+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:40.981135+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:41.981298+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989894 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:42.981451+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:43.981624+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79036416 unmapped: 18432000 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:44.981788+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79069184 unmapped: 18399232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:45.981939+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:46.982189+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989894 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:47.982366+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:48.982497+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:49.982627+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:50.982790+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:51.982947+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989894 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba33000/0x0/0x4ffc00000, data 0x11305af/0x11ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:52.983089+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79101952 unmapped: 18366464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x559098d57c00
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.313594818s of 22.573022842s, submitted: 39
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:53.983293+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79216640 unmapped: 18251776 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:54.983507+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 10
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 18194432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba2f000/0x0/0x4ffc00000, data 0x113532e/0x11ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:55.983690+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 18194432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:56.983875+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x5590981f0800
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 79364096 unmapped: 18104320 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992562 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:57.984049+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 17047552 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:58.984221+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba23000/0x0/0x4ffc00000, data 0x1140c8a/0x11fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 17047552 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:59.984368+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 80420864 unmapped: 17047552 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:00.984494+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 15917056 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba23000/0x0/0x4ffc00000, data 0x1140c8a/0x11fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:01.984627+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 16089088 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994068 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:02.984739+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 11
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 15966208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:03.984888+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.530145645s of 10.658089638s, submitted: 43
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 15966208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:04.985023+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 15966208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:05.985168+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x11533a0/0x120e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 15925248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:06.985353+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 15818752 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998324 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:07.985491+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 15745024 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:08.985624+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 15745024 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:09.985743+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 15720448 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:10.986185+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fba03000/0x0/0x4ffc00000, data 0x115f605/0x121b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 15687680 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:11.986625+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 14639104 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997130 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:12.987790+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 82944000 unmapped: 14524416 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9fc000/0x0/0x4ffc00000, data 0x11678e1/0x1222000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:13.988796+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.448275566s of 10.000021935s, submitted: 39
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83066880 unmapped: 14401536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:14.989340+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9f3000/0x0/0x4ffc00000, data 0x117085c/0x122b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83091456 unmapped: 14376960 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:15.990037+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83116032 unmapped: 14352384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:16.990304+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 14286848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000216 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:17.990683+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83181568 unmapped: 14286848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:18.990887+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 14278656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:19.991086+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83214336 unmapped: 14254080 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:20.991251+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9e7000/0x0/0x4ffc00000, data 0x117c65f/0x1237000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 14196736 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:21.991670+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83271680 unmapped: 14196736 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998990 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:22.991981+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 14098432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:23.992200+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 14098432 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.366621971s of 10.501939774s, submitted: 33
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:24.992403+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 14057472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:25.992587+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9e4000/0x0/0x4ffc00000, data 0x1180231/0x123a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 14008320 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:26.992867+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 14008320 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:27.993040+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001710 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:28.993376+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:29.993524+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:30.993712+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9da000/0x0/0x4ffc00000, data 0x118a393/0x1244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:31.993912+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 heartbeat osd_stat(store_statfs(0x4fb9da000/0x0/0x4ffc00000, data 0x118a393/0x1244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 13983744 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:32.995053+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002326 data_alloc: 218103808 data_used: 249856
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 14147584 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:33.995345+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 14139392 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:34.995520+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb9d1000/0x0/0x4ffc00000, data 0x1191054/0x124c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 14106624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:35.995725+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 14106624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:36.995923+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 14106624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.712274551s of 13.102365494s, submitted: 45
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:37.996138+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005064 data_alloc: 218103808 data_used: 258048
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 14057472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:38.996329+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 14057472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:39.996523+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 13959168 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb9c4000/0x0/0x4ffc00000, data 0x119c6f7/0x125a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:40.996734+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 13910016 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:41.996887+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 13811712 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:42.996965+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008394 data_alloc: 218103808 data_used: 258048
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 126 heartbeat osd_stat(store_statfs(0x4fb9c0000/0x0/0x4ffc00000, data 0x11a077b/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 13778944 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:43.997156+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 13778944 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:44.997340+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fb9ba000/0x0/0x4ffc00000, data 0x11a636d/0x1264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 11427840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa801000/0x0/0x4ffc00000, data 0x11bb43e/0x127c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:45.997514+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 11427840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:46.997705+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 12
Nov 29 05:54:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 11370496 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.686246872s of 10.000534058s, submitted: 55
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x559096ee4400
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1160944512' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:47.997932+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1014752 data_alloc: 218103808 data_used: 266240
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86482944 unmapped: 10985472 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:48.998070+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 10928128 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:49.998174+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 10862592 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:50.998316+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7e2000/0x0/0x4ffc00000, data 0x11dabf8/0x129c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7e3000/0x0/0x4ffc00000, data 0x11dab4b/0x129b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 10862592 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:51.998436+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86605824 unmapped: 10862592 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:52.998518+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020594 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 10805248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:53.998625+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7d4000/0x0/0x4ffc00000, data 0x11e9505/0x12aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 10805248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:54.998730+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 10805248 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:55.998854+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 10485760 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:56.998983+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86982656 unmapped: 10485760 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.237507820s of 10.010437965s, submitted: 53
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:57.999088+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1025372 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86966272 unmapped: 10502144 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:58.999233+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86876160 unmapped: 10592256 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa7a7000/0x0/0x4ffc00000, data 0x121746f/0x12d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [0,0,2])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:59.999350+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 10543104 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:00.999497+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa794000/0x0/0x4ffc00000, data 0x1227b64/0x12ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86925312 unmapped: 10543104 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:01.999667+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fa795000/0x0/0x4ffc00000, data 0x1227b32/0x12e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 86949888 unmapped: 10518528 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:02.999795+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034972 data_alloc: 218103808 data_used: 278528
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 10461184 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:03.999942+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 87007232 unmapped: 10461184 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:05.000051+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fa76f000/0x0/0x4ffc00000, data 0x124b864/0x130e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88006656 unmapped: 9461760 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:06.000191+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88137728 unmapped: 9330688 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:07.000392+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 9256960 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:08.000507+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1042542 data_alloc: 218103808 data_used: 274432
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.984232903s of 10.380507469s, submitted: 144
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 88064000 unmapped: 9404416 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:09.000629+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89112576 unmapped: 8355840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:10.000813+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa330000/0x0/0x4ffc00000, data 0x1279979/0x133c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89145344 unmapped: 8323072 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:11.000967+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89219072 unmapped: 8249344 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:12.001135+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa317000/0x0/0x4ffc00000, data 0x12928da/0x1355000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89227264 unmapped: 8241152 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:13.001316+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044072 data_alloc: 218103808 data_used: 274432
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa316000/0x0/0x4ffc00000, data 0x12959b1/0x1358000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89161728 unmapped: 8306688 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:14.001467+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 8167424 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:15.001623+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89300992 unmapped: 8167424 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:16.002153+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89350144 unmapped: 8118272 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:17.002342+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 8298496 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:18.002477+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1053172 data_alloc: 218103808 data_used: 282624
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89169920 unmapped: 8298496 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:19.003196+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.369832993s of 10.755517006s, submitted: 131
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa2e4000/0x0/0x4ffc00000, data 0x12c3c14/0x138a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa2d6000/0x0/0x4ffc00000, data 0x12d2acb/0x1398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89309184 unmapped: 8159232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:20.003324+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 8036352 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:21.003419+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89432064 unmapped: 8036352 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:22.003543+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89489408 unmapped: 7979008 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:23.003681+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1058860 data_alloc: 218103808 data_used: 286720
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 134 heartbeat osd_stat(store_statfs(0x4fa2bd000/0x0/0x4ffc00000, data 0x12ea68a/0x13b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89595904 unmapped: 7872512 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:24.003857+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 89767936 unmapped: 7700480 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:25.141879+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90816512 unmapped: 6651904 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:26.141995+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 6586368 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:27.142130+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa2aa000/0x0/0x4ffc00000, data 0x12fa7c6/0x13c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90431488 unmapped: 7036928 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:28.142248+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061434 data_alloc: 218103808 data_used: 294912
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90464256 unmapped: 7004160 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.862014771s of 10.027328491s, submitted: 53
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:29.142464+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 7020544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:30.142754+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 7020544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:31.142908+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa274000/0x0/0x4ffc00000, data 0x13323fc/0x13fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 7020544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:32.143155+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fa274000/0x0/0x4ffc00000, data 0x13323fc/0x13fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90939392 unmapped: 6529024 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:33.143295+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1069084 data_alloc: 218103808 data_used: 294912
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 6324224 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:34.143432+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91144192 unmapped: 6324224 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:35.143784+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa23b000/0x0/0x4ffc00000, data 0x136a897/0x1433000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90947584 unmapped: 6520832 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:36.143900+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91037696 unmapped: 6430720 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:37.144042+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91095040 unmapped: 6373376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:38.144205+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1072284 data_alloc: 218103808 data_used: 303104
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90783744 unmapped: 6684672 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:39.144419+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fa217000/0x0/0x4ffc00000, data 0x138e5f8/0x1457000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.346149445s of 10.593280792s, submitted: 69
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90882048 unmapped: 6586368 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:40.144595+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90456064 unmapped: 7012352 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:41.144781+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90505216 unmapped: 6963200 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:42.144980+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa207000/0x0/0x4ffc00000, data 0x139b281/0x1466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 6955008 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:43.145220+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1075678 data_alloc: 218103808 data_used: 315392
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 90513408 unmapped: 6955008 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:44.145394+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91627520 unmapped: 5840896 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:45.145541+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 5832704 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:46.145676+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1f3000/0x0/0x4ffc00000, data 0x13b0953/0x147b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91635712 unmapped: 5832704 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:47.145885+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 5726208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:48.146082+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076688 data_alloc: 218103808 data_used: 315392
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 5726208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:49.146387+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1e8000/0x0/0x4ffc00000, data 0x13bd0c1/0x1486000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91742208 unmapped: 5726208 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:50.146551+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.637351036s of 10.712457657s, submitted: 33
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 5578752 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:51.146691+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91889664 unmapped: 5578752 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:52.146841+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91914240 unmapped: 5554176 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:53.146987+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079056 data_alloc: 218103808 data_used: 315392
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1dd000/0x0/0x4ffc00000, data 0x13c7cd4/0x1491000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92119040 unmapped: 5349376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:54.147171+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92119040 unmapped: 5349376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:55.147350+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 5341184 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:56.147554+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92233728 unmapped: 5234688 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:57.147907+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa1b7000/0x0/0x4ffc00000, data 0x13ee245/0x14b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 5185536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:58.148066+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1081868 data_alloc: 218103808 data_used: 315392
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92282880 unmapped: 5185536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:59.148193+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa18c000/0x0/0x4ffc00000, data 0x1418d7a/0x14e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 4890624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:00.148353+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.908507347s of 10.015370369s, submitted: 35
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92577792 unmapped: 4890624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:01.148548+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92536832 unmapped: 4931584 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:02.148698+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:03.148815+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1082444 data_alloc: 218103808 data_used: 315392
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:04.148965+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa179000/0x0/0x4ffc00000, data 0x142b56b/0x14f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:05.149290+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 5332992 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:06.149411+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa179000/0x0/0x4ffc00000, data 0x142b56b/0x14f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa16e000/0x0/0x4ffc00000, data 0x143691e/0x1500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92135424 unmapped: 5332992 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:07.149586+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa16e000/0x0/0x4ffc00000, data 0x143691e/0x1500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 5169152 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:08.149740+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085216 data_alloc: 218103808 data_used: 315392
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92438528 unmapped: 5029888 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:09.149880+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 5693440 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:10.150032+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 91774976 unmapped: 5693440 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:11.150159+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.764957428s of 11.050184250s, submitted: 23
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 ms_handle_reset con 0x559096ee4400 session 0x55909a371a40
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92315648 unmapped: 5152768 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:12.150290+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa153000/0x0/0x4ffc00000, data 0x1451ca7/0x151b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 13
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 5087232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:13.150571+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083152 data_alloc: 218103808 data_used: 315392
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 92381184 unmapped: 5087232 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:14.150760+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93601792 unmapped: 3866624 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:15.150901+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 3801088 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:16.151050+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93667328 unmapped: 3801088 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:17.151194+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 93904896 unmapped: 3563520 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:18.151347+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fa11d000/0x0/0x4ffc00000, data 0x1486bbb/0x1551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1094026 data_alloc: 218103808 data_used: 323584
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94060544 unmapped: 3407872 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:19.151473+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94167040 unmapped: 3301376 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:20.151618+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2981888 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:21.151761+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0d7000/0x0/0x4ffc00000, data 0x14cad4a/0x1597000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.159432411s of 10.446393013s, submitted: 280
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94486528 unmapped: 2981888 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:22.151927+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:23.152103+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94519296 unmapped: 2949120 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101764 data_alloc: 218103808 data_used: 327680
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fa0c2000/0x0/0x4ffc00000, data 0x14e0882/0x15ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:24.152245+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 3235840 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:25.152356+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 3014656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:26.152498+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94453760 unmapped: 3014656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:27.152709+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94543872 unmapped: 2924544 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:28.152922+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94617600 unmapped: 2850816 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1102396 data_alloc: 218103808 data_used: 335872
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:29.153075+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa0a0000/0x0/0x4ffc00000, data 0x1500acb/0x15ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:30.153233+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:31.153395+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:32.153622+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 94740480 unmapped: 2727936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.922378540s of 11.010833740s, submitted: 41
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:33.153771+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107168 data_alloc: 218103808 data_used: 335872
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa078000/0x0/0x4ffc00000, data 0x1528089/0x15f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:34.154121+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:35.154342+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:36.154469+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa078000/0x0/0x4ffc00000, data 0x1528089/0x15f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:37.154643+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95182848 unmapped: 2285568 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:38.154836+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 2138112 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1105844 data_alloc: 218103808 data_used: 335872
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:39.155062+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:40.155221+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:41.155378+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 1785856 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa04b000/0x0/0x4ffc00000, data 0x1554fba/0x1623000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:42.155546+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 1671168 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.918478012s of 10.029466629s, submitted: 25
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:43.155674+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 1703936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1110248 data_alloc: 218103808 data_used: 335872
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:44.155868+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 1703936 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:45.156024+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 1654784 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa040000/0x0/0x4ffc00000, data 0x1560189/0x162e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:46.156170+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 1654784 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:47.156321+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 1654784 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:48.156482+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95830016 unmapped: 1638400 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111578 data_alloc: 218103808 data_used: 335872
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:49.156642+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 95944704 unmapped: 1523712 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa013000/0x0/0x4ffc00000, data 0x158d904/0x165b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:50.156831+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97034240 unmapped: 1482752 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fa014000/0x0/0x4ffc00000, data 0x158d933/0x165a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:51.156939+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 2342912 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:52.157072+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96174080 unmapped: 2342912 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.550374985s of 10.174007416s, submitted: 36
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:53.157136+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96141312 unmapped: 2375680 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121358 data_alloc: 218103808 data_used: 335872
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:54.157325+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96305152 unmapped: 2211840 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:55.157458+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96305152 unmapped: 2211840 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:56.157576+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96305152 unmapped: 2211840 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f9fdb000/0x0/0x4ffc00000, data 0x15c4606/0x1692000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:57.157756+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 1343488 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:58.157913+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 1343488 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1122254 data_alloc: 218103808 data_used: 335872
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f9fa4000/0x0/0x4ffc00000, data 0x15fb1f5/0x16ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:59.158081+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97173504 unmapped: 1343488 heap: 98516992 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 heartbeat osd_stat(store_statfs(0x4f9fa4000/0x0/0x4ffc00000, data 0x15fb25a/0x16ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:00.158210+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 3039232 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:01.158332+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 3039232 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:02.158464+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96526336 unmapped: 3039232 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:03.158674+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 96698368 unmapped: 2867200 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121988 data_alloc: 218103808 data_used: 335872
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.927287102s of 10.717306137s, submitted: 67
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:04.158825+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97091584 unmapped: 2473984 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 142 heartbeat osd_stat(store_statfs(0x4f9f27000/0x0/0x4ffc00000, data 0x167852e/0x1746000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:05.158934+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97140736 unmapped: 2424832 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:06.159069+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97394688 unmapped: 2170880 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:07.159209+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98451456 unmapped: 1114112 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:08.159416+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98451456 unmapped: 1114112 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1140216 data_alloc: 218103808 data_used: 344064
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:09.159532+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 876544 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:10.159788+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 1556480 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9ee6000/0x0/0x4ffc00000, data 0x16b7576/0x1788000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:11.159970+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98009088 unmapped: 1556480 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:12.160178+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98156544 unmapped: 1409024 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:13.160351+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98213888 unmapped: 1351680 heap: 99565568 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142228 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.739322662s of 10.120580673s, submitted: 129
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:14.160496+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98320384 unmapped: 2293760 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9e71000/0x0/0x4ffc00000, data 0x172a418/0x17fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:15.160702+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9e63000/0x0/0x4ffc00000, data 0x1738cfd/0x180b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:16.160833+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:17.160989+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:18.161123+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 97640448 unmapped: 2973696 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 143 heartbeat osd_stat(store_statfs(0x4f9e60000/0x0/0x4ffc00000, data 0x173b74a/0x180e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148780 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:19.161255+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 1925120 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:20.161477+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98689024 unmapped: 1925120 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:21.161624+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 1703936 heap: 100614144 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:22.161806+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98910208 unmapped: 2752512 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9e04000/0x0/0x4ffc00000, data 0x17943da/0x1868000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:23.161970+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 98975744 unmapped: 2686976 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160660 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:24.162079+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99180544 unmapped: 2482176 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:25.162213+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.834702492s of 11.411822319s, submitted: 75
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 2375680 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:26.162310+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99287040 unmapped: 2375680 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:27.162443+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9dc9000/0x0/0x4ffc00000, data 0x17d0f56/0x18a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99475456 unmapped: 2187264 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:28.162767+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 99606528 unmapped: 2056192 heap: 101662720 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156908 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:29.162897+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 1925120 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d7b000/0x0/0x4ffc00000, data 0x181d32b/0x18f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:30.163162+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 1728512 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:31.163322+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100982784 unmapped: 1728512 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:32.163713+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d69000/0x0/0x4ffc00000, data 0x1831591/0x1905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100990976 unmapped: 1720320 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:33.163861+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 2187264 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d2f000/0x0/0x4ffc00000, data 0x186b11c/0x193f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168314 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:34.164033+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 2187264 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:35.164152+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100524032 unmapped: 2187264 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9d2f000/0x0/0x4ffc00000, data 0x186b11c/0x193f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:36.164335+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.561640739s of 10.917224884s, submitted: 76
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 1925120 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:37.164540+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 1957888 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:38.164688+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 1908736 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177168 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:39.164839+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9cd1000/0x0/0x4ffc00000, data 0x18c9181/0x199d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101122048 unmapped: 1589248 heap: 102711296 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:40.164966+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101900288 unmapped: 1859584 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:41.165146+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101957632 unmapped: 1802240 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c83000/0x0/0x4ffc00000, data 0x19185d2/0x19eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:42.165294+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102285312 unmapped: 1474560 heap: 103759872 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:43.165408+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101187584 unmapped: 3620864 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182486 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:44.165590+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101244928 unmapped: 3563520 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9c20000/0x0/0x4ffc00000, data 0x197b760/0x1a4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:45.165751+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101679104 unmapped: 3129344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:46.165889+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 101679104 unmapped: 3129344 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.754183769s of 10.750842094s, submitted: 94
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:47.166059+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102735872 unmapped: 2072576 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9bf1000/0x0/0x4ffc00000, data 0x19aad7e/0x1a7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:48.166208+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103104512 unmapped: 1703936 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189048 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:49.166400+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103186432 unmapped: 1622016 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:50.166573+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103211008 unmapped: 1597440 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:51.166725+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 1589248 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:52.166872+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 1589248 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b80000/0x0/0x4ffc00000, data 0x1a1a472/0x1aee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:53.167045+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 1589248 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1194272 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:54.167227+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 1933312 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:55.167425+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 1933312 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:56.167587+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 1933312 heap: 104808448 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9b48000/0x0/0x4ffc00000, data 0x1a52e2b/0x1b26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:57.167802+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103055360 unmapped: 2801664 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.477082253s of 10.629286766s, submitted: 73
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:58.167991+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104144896 unmapped: 1712128 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1200826 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9af5000/0x0/0x4ffc00000, data 0x1aa5978/0x1b79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:59.168160+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104153088 unmapped: 1703936 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:00.168369+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 1359872 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9af6000/0x0/0x4ffc00000, data 0x1aa59e0/0x1b78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:01.168508+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 1359872 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:02.168661+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103063552 unmapped: 2793472 heap: 105857024 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:03.168808+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103383040 unmapped: 3522560 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202774 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:04.168954+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103391232 unmapped: 3514368 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:05.169086+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9a8d000/0x0/0x4ffc00000, data 0x1b0df35/0x1be1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 103399424 unmapped: 3506176 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:06.169199+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104857600 unmapped: 2048000 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:07.169341+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 104955904 unmapped: 1949696 heap: 106905600 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.102632523s of 10.059342384s, submitted: 95
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:08.169452+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105545728 unmapped: 2408448 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223360 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:09.169838+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105603072 unmapped: 2351104 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:10.170027+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105611264 unmapped: 2342912 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:11.170143+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f95b0000/0x0/0x4ffc00000, data 0x1bdaf46/0x1cae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105627648 unmapped: 2326528 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:12.170339+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 105889792 unmapped: 2064384 heap: 107954176 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:13.170522+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 2039808 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1224156 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:14.170664+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c1dff5/0x1cf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 2039808 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f956b000/0x0/0x4ffc00000, data 0x1c1dff5/0x1cf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:15.170825+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 1802240 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:16.170992+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107216896 unmapped: 1785856 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:17.171193+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 1703936 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922871590s of 10.191562653s, submitted: 86
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:18.171320+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9532000/0x0/0x4ffc00000, data 0x1c57e35/0x1d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 1531904 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219940 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:19.171451+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 1531904 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f9532000/0x0/0x4ffc00000, data 0x1c57e35/0x1d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,1])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:20.171608+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 1433600 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:21.171760+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 1327104 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:22.171929+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94fe000/0x0/0x4ffc00000, data 0x1c8d4d9/0x1d60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 1327104 heap: 109002752 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:23.172065+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 107692032 unmapped: 2359296 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226856 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94e8000/0x0/0x4ffc00000, data 0x1ca3492/0x1d76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:24.172192+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 901120 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:25.172424+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 901120 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:26.172579+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109150208 unmapped: 901120 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:27.172683+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94c4000/0x0/0x4ffc00000, data 0x1cc71b5/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 548864 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.210618019s of 10.000334740s, submitted: 57
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f94c4000/0x0/0x4ffc00000, data 0x1cc71b5/0x1d9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:28.173064+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f82f7000/0x0/0x4ffc00000, data 0x1cf41ea/0x1dc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x5b3f9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109584384 unmapped: 466944 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237272 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:29.173304+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 109600768 unmapped: 450560 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:30.173459+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 108904448 unmapped: 1146880 heap: 110051328 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:31.173576+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 108994560 unmapped: 2105344 heap: 111099904 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:32.173687+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 heartbeat osd_stat(store_statfs(0x4f7107000/0x0/0x4ffc00000, data 0x1d42634/0x1e17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111362048 unmapped: 786432 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:33.173888+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111575040 unmapped: 573440 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240492 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:34.174000+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
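The handle_osd_map lines show the OSD ingesting new OSDMap epochs: the bracketed range is what arrived in this message, "i have" is the newest epoch already applied, and "src has" is the sender's full range. A sketch (regex shape inferred from the lines above) to spot an OSD falling behind:

import re

# Minimal sketch: measure OSDMap lag from a handle_osd_map line (regex inferred
# from the log lines above).
line = "osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]"
m = re.search(r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]", line)
first, last, have, src_lo, src_hi = map(int, m.groups())
print(f"received epochs {first}..{last}; epochs behind sender: {src_hi - have}")
# -> received epochs 145..145; epochs behind sender: 1

A lag of one epoch that clears immediately, as in the 144→145→146→147→148 progression through this window, is normal map churn rather than a stuck OSD.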
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:35.174111+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x1d559c8/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:36.174213+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:37.174323+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.811102867s of 10.000490189s, submitted: 66
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:38.174468+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x1d559c8/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237784 data_alloc: 218103808 data_used: 368640
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:39.174677+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:40.174861+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 1703936 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:41.175063+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x1d559c8/0x1e2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:42.175233+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:43.175401+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1236390 data_alloc: 218103808 data_used: 368640
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:44.175632+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110436352 unmapped: 1712128 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 145 heartbeat osd_stat(store_statfs(0x4f70f5000/0x0/0x4ffc00000, data 0x1d55a5c/0x1e29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:45.175765+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:46.175931+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:47.176114+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.261400223s of 10.000162125s, submitted: 21
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:48.176328+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f70f0000/0x0/0x4ffc00000, data 0x1d5755a/0x1e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 1654784 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241964 data_alloc: 218103808 data_used: 376832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:49.176436+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 146 heartbeat osd_stat(store_statfs(0x4f70f0000/0x0/0x4ffc00000, data 0x1d5755a/0x1e2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:50.176570+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:51.176716+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:52.176825+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 1646592 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:53.176928+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1244584 data_alloc: 218103808 data_used: 385024
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:54.177062+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f70ee000/0x0/0x4ffc00000, data 0x1d5929e/0x1e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:55.177210+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:56.177345+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 110534656 unmapped: 1613824 heap: 112148480 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f70ee000/0x0/0x4ffc00000, data 0x1d5929e/0x1e2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:57.177507+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 147 heartbeat osd_stat(store_statfs(0x4f70ed000/0x0/0x4ffc00000, data 0x1d59339/0x1e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111591424 unmapped: 1605632 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.875500679s of 10.000182152s, submitted: 35
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:58.177778+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 1597440 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1247528 data_alloc: 218103808 data_used: 385024
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:59.177959+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 1597440 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:00.178153+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70ec000/0x0/0x4ffc00000, data 0x1d59466/0x1e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:01.178315+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:02.178470+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:03.178590+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251238 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:04.178720+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:05.178889+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:06.179027+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70ea000/0x0/0x4ffc00000, data 0x1d5afcb/0x1e34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 1589248 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70ea000/0x0/0x4ffc00000, data 0x1d5afcb/0x1e34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:07.179208+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 1581056 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.932135582s of 10.000720978s, submitted: 29
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:08.179319+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 1581056 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250570 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:09.179440+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 1581056 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:10.179594+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 14
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 1564672 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:11.179745+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 1564672 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70eb000/0x0/0x4ffc00000, data 0x1d5b05f/0x1e33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:12.179918+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2999 syncs, 3.64 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3872 writes, 13K keys, 3872 commit groups, 1.0 writes per commit group, ingest: 20.11 MB, 0.03 MB/s
                                           Interval WAL: 3872 writes, 1699 syncs, 2.28 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
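The ratios in the stats dump above can be cross-checked directly from the printed counters; the interval WAL figures, for example:

# Minimal sketch: re-derive two of the printed interval figures from the dump above.
interval_writes, interval_syncs = 3872, 1699
print(f"writes per sync: {interval_writes / interval_syncs:.2f}")      # -> 2.28
interval_ingest_mb, interval_secs = 20.11, 600.0
print(f"ingest rate: {interval_ingest_mb / interval_secs:.2f} MB/s")   # -> 0.03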
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 1564672 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x55909995d800
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:13.180050+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70eb000/0x0/0x4ffc00000, data 0x1d5b196/0x1e33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255762 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:14.180217+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e6000/0x0/0x4ffc00000, data 0x1d5b3c2/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e6000/0x0/0x4ffc00000, data 0x1d5b3c2/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:15.180338+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:16.180536+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:17.180771+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5b552/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 1548288 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.908580780s of 10.000893593s, submitted: 23
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:18.180882+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 ms_handle_reset con 0x5590971abc00 session 0x559096728f00
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x5590972f6c00
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 1540096 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e9000/0x0/0x4ffc00000, data 0x1d5b5ba/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1254688 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:19.181010+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 ms_handle_reset con 0x55909a3ba400 session 0x5590999f4000
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x559098d80000
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 1531904 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 ms_handle_reset con 0x55909a3b9000 session 0x559099b630e0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x5590997d8800
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:20.181126+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 1531904 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:21.181303+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:22.181447+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:23.181594+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255686 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:24.181748+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5b6ec/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 1523712 heap: 113197056 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:25.181809+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 2564096 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:26.181976+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 2564096 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:27.182181+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e7000/0x0/0x4ffc00000, data 0x1d5b817/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.950098038s of 10.003911972s, submitted: 16
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e7000/0x0/0x4ffc00000, data 0x1d5b817/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:28.182298+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258052 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:29.182435+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:30.182594+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:31.182739+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 2555904 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:32.182858+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e7000/0x0/0x4ffc00000, data 0x1d5b9bb/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 2547712 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:33.183032+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 2547712 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1257758 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:34.183180+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:35.183312+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:36.183503+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:37.183756+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.925294876s of 10.001269341s, submitted: 25
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bab8/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:38.183956+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256700 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:39.184129+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bab8/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 2539520 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:40.184285+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:41.184434+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:42.184567+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:43.184680+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e9000/0x0/0x4ffc00000, data 0x1d5bb3e/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1256876 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:44.184809+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:45.184928+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:46.185049+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111714304 unmapped: 2531328 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:47.185194+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bb82/0x1e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:48.185311+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.725295067s of 10.793285370s, submitted: 20
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258948 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:49.185471+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:50.185581+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:51.185790+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bcd1/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:52.185913+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111730688 unmapped: 2514944 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:53.186074+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f70e8000/0x0/0x4ffc00000, data 0x1d5bcd1/0x1e36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111747072 unmapped: 2498560 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264654 data_alloc: 218103808 data_used: 393216
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:54.186250+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 2449408 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:55.186440+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 2211840 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:56.186611+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 112033792 unmapped: 2211840 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:57.186783+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 770048 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f7085000/0x0/0x4ffc00000, data 0x1dba787/0x1e96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:58.186930+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.767315865s of 10.010424614s, submitted: 67
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 113557504 unmapped: 688128 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f706e000/0x0/0x4ffc00000, data 0x1dd359e/0x1eb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280744 data_alloc: 218103808 data_used: 397312
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:59.187072+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 113573888 unmapped: 671744 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f701b000/0x0/0x4ffc00000, data 0x1e24c16/0x1f00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:00.187177+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114122752 unmapped: 122880 heap: 114245632 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:01.187307+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 2113536 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:02.187452+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6fdd000/0x0/0x4ffc00000, data 0x1e671fe/0x1f41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115384320 unmapped: 958464 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:03.187576+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 heartbeat osd_stat(store_statfs(0x4f6fad000/0x0/0x4ffc00000, data 0x1e976c1/0x1f71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 1638400 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278656 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:04.187718+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 1638400 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:05.187833+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114704384 unmapped: 1638400 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:06.187998+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 1515520 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:07.188163+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 1515520 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:08.188325+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 149 heartbeat osd_stat(store_statfs(0x4f6f78000/0x0/0x4ffc00000, data 0x1ecb3cc/0x1fa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.705419540s of 10.009120941s, submitted: 105
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 1507328 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1285022 data_alloc: 218103808 data_used: 401408
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:09.188457+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 114835456 unmapped: 1507328 heap: 116342784 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:10.188602+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 2146304 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 150 heartbeat osd_stat(store_statfs(0x4f6f78000/0x0/0x4ffc00000, data 0x1ecba62/0x1fa6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:11.188706+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115318784 unmapped: 2072576 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:12.188827+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6f40000/0x0/0x4ffc00000, data 0x1f00b90/0x1fdd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [1])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 15
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 2244608 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:13.188966+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 2154496 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296348 data_alloc: 218103808 data_used: 409600
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:14.189100+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 2138112 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:15.189287+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6ef6000/0x0/0x4ffc00000, data 0x1f4b216/0x2028000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 2138112 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:16.189431+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 2138112 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:17.189577+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 1032192 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:18.189721+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 802816 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1300896 data_alloc: 218103808 data_used: 409600
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:19.189834+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6ebf000/0x0/0x4ffc00000, data 0x1f817d1/0x205f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.160684586s of 10.642349243s, submitted: 176
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 638976 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:20.189953+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f6eac000/0x0/0x4ffc00000, data 0x1f9425f/0x2072000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 581632 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:21.190078+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116809728 unmapped: 581632 heap: 117391360 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:22.190382+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 2433024 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:23.190511+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f6e58000/0x0/0x4ffc00000, data 0x1fe61cd/0x20c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 2433024 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:24.190654+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308970 data_alloc: 218103808 data_used: 417792
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 152 heartbeat osd_stat(store_statfs(0x4f6e58000/0x0/0x4ffc00000, data 0x1fe61cd/0x20c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 2220032 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:25.190786+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 3072000 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:26.190897+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 3022848 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:27.191086+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 2834432 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:28.191214+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 116604928 unmapped: 2883584 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:29.191335+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315368 data_alloc: 218103808 data_used: 425984
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.695703506s of 10.406598091s, submitted: 85
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117743616 unmapped: 1744896 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:30.191447+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 153 heartbeat osd_stat(store_statfs(0x4f6dff000/0x0/0x4ffc00000, data 0x203cc62/0x211e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117915648 unmapped: 1572864 heap: 119488512 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:31.191588+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117923840 unmapped: 2613248 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:32.191716+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 2703360 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f6ddf000/0x0/0x4ffc00000, data 0x205ad14/0x213e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:33.191857+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 3145728 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:34.192003+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335690 data_alloc: 218103808 data_used: 430080
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117391360 unmapped: 3145728 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:35.195016+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117399552 unmapped: 3137536 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:36.195226+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 2678784 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:37.195493+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 1474560 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:38.195624+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 154 heartbeat osd_stat(store_statfs(0x4f6cd8000/0x0/0x4ffc00000, data 0x2163ea1/0x2246000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119136256 unmapped: 1400832 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:39.195743+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340182 data_alloc: 218103808 data_used: 438272
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119455744 unmapped: 1081344 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:40.195853+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.213871956s of 10.749114037s, submitted: 134
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 917504 heap: 120537088 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:41.195987+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 156 ms_handle_reset con 0x55909995d800 session 0x5590972f52c0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119996416 unmapped: 1589248 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:42.196157+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 2244608 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:43.196355+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 16
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119341056 unmapped: 2244608 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:44.196560+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348512 data_alloc: 218103808 data_used: 446464
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 157 heartbeat osd_stat(store_statfs(0x4f6c5b000/0x0/0x4ffc00000, data 0x21de3be/0x22c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 2121728 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:45.196775+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 2023424 heap: 121585664 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:46.196942+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 2072576 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:47.197166+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120758272 unmapped: 1875968 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:48.197383+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 2105344 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:49.197569+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1362664 data_alloc: 218103808 data_used: 462848
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6baa000/0x0/0x4ffc00000, data 0x2286816/0x2372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120537088 unmapped: 2097152 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 159 heartbeat osd_stat(store_statfs(0x4f6baa000/0x0/0x4ffc00000, data 0x2286816/0x2372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:50.197690+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.560856819s of 10.171666145s, submitted: 344
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120545280 unmapped: 2088960 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:51.197857+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6baa000/0x0/0x4ffc00000, data 0x2286845/0x2372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 1810432 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:52.197991+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6b86000/0x0/0x4ffc00000, data 0x22ac1e5/0x2398000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6b77000/0x0/0x4ffc00000, data 0x22ba7b7/0x23a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [0,0,0,2])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 1736704 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:53.198151+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 120897536 unmapped: 1736704 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:54.198321+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1376386 data_alloc: 218103808 data_used: 462848
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121200640 unmapped: 1433600 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6b49000/0x0/0x4ffc00000, data 0x22e8d2a/0x23d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:55.198469+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121028608 unmapped: 1605632 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:56.198585+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121192448 unmapped: 1441792 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:57.198762+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 1327104 heap: 122634240 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:58.198898+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: handle_auth_request added challenge on 0x559098d69400
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 1138688 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:59.199044+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1394826 data_alloc: 218103808 data_used: 462848
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 160 heartbeat osd_stat(store_statfs(0x4f6a9c000/0x0/0x4ffc00000, data 0x2391dee/0x2481000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cdf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122544128 unmapped: 1138688 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:00.199200+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.695484161s of 10.078989983s, submitted: 75
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 17
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 1794048 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:01.199363+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121888768 unmapped: 1794048 heap: 123682816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:02.199509+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 3088384 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:03.199674+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 3088384 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:04.199857+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1399958 data_alloc: 218103808 data_used: 483328
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121643008 unmapped: 3088384 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f6602000/0x0/0x4ffc00000, data 0x2419885/0x250b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:05.200089+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121896960 unmapped: 2834432 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:06.200303+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 2785280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:07.200501+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 2785280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:08.200621+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 121946112 unmapped: 2785280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:09.200761+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1400984 data_alloc: 218103808 data_used: 479232
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 162 heartbeat osd_stat(store_statfs(0x4f6592000/0x0/0x4ffc00000, data 0x248d5aa/0x257c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122208256 unmapped: 2523136 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:10.200869+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.681946754s of 10.010542870s, submitted: 132
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 1458176 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:11.200987+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123273216 unmapped: 1458176 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:12.201071+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122347520 unmapped: 2383872 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:13.201236+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122576896 unmapped: 2154496 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:14.201393+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409928 data_alloc: 218103808 data_used: 487424
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122585088 unmapped: 2146304 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:15.201546+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6548000/0x0/0x4ffc00000, data 0x24d8331/0x25c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:16.201693+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:17.201975+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6548000/0x0/0x4ffc00000, data 0x24d8331/0x25c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:18.202118+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:19.202298+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1407850 data_alloc: 218103808 data_used: 487424
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:20.202461+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:21.202559+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122732544 unmapped: 1998848 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6538000/0x0/0x4ffc00000, data 0x24e7ec8/0x25d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:22.202668+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6538000/0x0/0x4ffc00000, data 0x24e7ec8/0x25d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.926156998s of 12.125965118s, submitted: 33
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 1818624 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:23.202785+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 1818624 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:24.202954+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:25.203091+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:26.203204+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:27.203411+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:28.203544+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:29.203700+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:30.203858+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:31.203981+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:32.204139+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:33.204306+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:34.204484+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:35.204594+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:36.204710+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:37.204947+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:38.205146+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:39.205345+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:40.205542+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:41.205689+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:42.205843+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:43.206051+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:44.206199+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:45.206375+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:46.206561+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 1761280 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:47.206865+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:48.206982+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:49.207176+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408294 data_alloc: 218103808 data_used: 487424
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:50.207358+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:51.207527+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:52.207663+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 122978304 unmapped: 1753088 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f652b000/0x0/0x4ffc00000, data 0x24f552a/0x25e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.994432449s of 29.999633789s, submitted: 1
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:53.207815+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123117568 unmapped: 1613824 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:54.207973+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123117568 unmapped: 1613824 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1408722 data_alloc: 218103808 data_used: 495616
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f651a000/0x0/0x4ffc00000, data 0x2506322/0x25f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:55.208148+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123117568 unmapped: 1613824 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f651a000/0x0/0x4ffc00000, data 0x2506322/0x25f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:56.208373+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123117568 unmapped: 1613824 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f651a000/0x0/0x4ffc00000, data 0x2506322/0x25f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:57.208609+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123174912 unmapped: 1556480 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:58.208737+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123174912 unmapped: 1556480 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64fa000/0x0/0x4ffc00000, data 0x2525d81/0x2614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64fa000/0x0/0x4ffc00000, data 0x2525d81/0x2614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:59.209017+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123174912 unmapped: 1556480 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409298 data_alloc: 218103808 data_used: 495616
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:00.209130+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64fa000/0x0/0x4ffc00000, data 0x2525d81/0x2614000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 1417216 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:01.209247+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 1417216 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:02.209404+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 1417216 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.930094719s of 10.000052452s, submitted: 11
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64c4000/0x0/0x4ffc00000, data 0x255c2b6/0x264a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:03.209521+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123314176 unmapped: 1417216 heap: 124731392 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:04.209641+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 2285568 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1412898 data_alloc: 218103808 data_used: 495616
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:05.209749+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 2277376 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:06.209867+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:07.210081+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:08.210230+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64a6000/0x0/0x4ffc00000, data 0x257a4ae/0x2668000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:09.210332+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414468 data_alloc: 218103808 data_used: 495616
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:10.210432+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:11.210539+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 123543552 unmapped: 2236416 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f64a6000/0x0/0x4ffc00000, data 0x257a4ae/0x2668000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 ms_handle_reset con 0x559098d69400 session 0x55909a3ca1e0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:12.210647+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 1097728 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.509676933s of 10.000202179s, submitted: 215
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f644f000/0x0/0x4ffc00000, data 0x25cf2da/0x26bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:13.210716+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 18
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124846080 unmapped: 933888 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:14.210891+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124846080 unmapped: 933888 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1424324 data_alloc: 218103808 data_used: 495616
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:15.211004+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124616704 unmapped: 1163264 heap: 125779968 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:16.211139+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124780544 unmapped: 2048000 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6438000/0x0/0x4ffc00000, data 0x25e6322/0x26d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:17.211373+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 124780544 unmapped: 2048000 heap: 126828544 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:18.211485+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125829120 unmapped: 2048000 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:19.211580+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125870080 unmapped: 2007040 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f6411000/0x0/0x4ffc00000, data 0x260da12/0x26fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426000 data_alloc: 218103808 data_used: 495616
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:20.211706+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125599744 unmapped: 2277376 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f63ec000/0x0/0x4ffc00000, data 0x2633d5a/0x2722000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:21.211842+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125771776 unmapped: 2105344 heap: 127877120 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:22.211975+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 3342336 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.760641098s of 10.020527840s, submitted: 37
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:23.212113+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 3694592 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 heartbeat osd_stat(store_statfs(0x4f63b9000/0x0/0x4ffc00000, data 0x2666b72/0x2755000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:24.212213+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 3457024 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435388 data_alloc: 218103808 data_used: 503808
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:25.212327+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125468672 unmapped: 3457024 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:26.212453+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125476864 unmapped: 3448832 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:27.212599+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f635c000/0x0/0x4ffc00000, data 0x26c1d9f/0x27b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,4])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:28.212708+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:29.212834+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437270 data_alloc: 218103808 data_used: 503808
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:30.212962+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:31.213100+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:32.213298+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 3186688 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:33.213453+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125837312 unmapped: 3088384 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6324000/0x0/0x4ffc00000, data 0x26fb273/0x27ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2,1])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.065333366s of 10.991191864s, submitted: 51
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:34.213642+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125837312 unmapped: 3088384 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1437020 data_alloc: 218103808 data_used: 499712
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 164 heartbeat osd_stat(store_statfs(0x4f6319000/0x0/0x4ffc00000, data 0x27061cb/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x70ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:35.213790+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125837312 unmapped: 3088384 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:36.213954+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 126001152 unmapped: 2924544 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:37.214143+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125968384 unmapped: 2957312 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:38.214246+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125968384 unmapped: 2957312 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:39.214410+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125968384 unmapped: 2957312 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1447224 data_alloc: 218103808 data_used: 512000
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f72cf000/0x0/0x4ffc00000, data 0x27703bb/0x285f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:40.214534+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 3743744 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:41.214672+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 3735552 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:42.214845+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 3735552 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f72cc000/0x0/0x4ffc00000, data 0x2774689/0x2862000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:43.215024+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 126238720 unmapped: 2686976 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:44.215208+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 126238720 unmapped: 2686976 heap: 128925696 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 165 heartbeat osd_stat(store_statfs(0x4f72af000/0x0/0x4ffc00000, data 0x27915e4/0x287f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 165 handle_osd_map epochs [166,166], i have 166, src has [1,166]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.848725796s of 10.614721298s, submitted: 42
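The periodic _kv_sync_thread utilization lines are a simple load gauge: idle time within the sampling window plus the number of transactions committed in it. Here 7.85 s idle out of a 10.61 s window for 42 commits means the kv sync thread was busy only about a quarter of the time, and the later samples in this section are quieter still:

    idle, window, submitted = 7.848725796, 10.614721298, 42
    print(f"busy {1 - idle / window:.1%}, {submitted / window:.1f} commits/s")  # busy 26.1%, 4.0 commits/s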
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1450028 data_alloc: 218103808 data_used: 520192
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:45.215414+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127303680 unmapped: 2670592 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f72ab000/0x0/0x4ffc00000, data 0x27931fa/0x2882000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:46.215707+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127352832 unmapped: 2621440 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:47.216039+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127352832 unmapped: 2621440 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:48.216169+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 2564096 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 166 heartbeat osd_stat(store_statfs(0x4f728d000/0x0/0x4ffc00000, data 0x27b2240/0x28a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:49.216346+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 2564096 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451364 data_alloc: 218103808 data_used: 520192
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:50.216493+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127410176 unmapped: 2564096 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:51.216636+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:52.216807+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:53.216923+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:54.217065+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7289000/0x0/0x4ffc00000, data 0x27b3cc3/0x28a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453538 data_alloc: 218103808 data_used: 528384
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:55.217198+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:56.217303+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:57.217458+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127418368 unmapped: 2555904 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:58.217581+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.579467773s of 13.778797150s, submitted: 57
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:59.217687+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457502 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f726e000/0x0/0x4ffc00000, data 0x27cea44/0x28c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:00.217883+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:01.218045+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f726e000/0x0/0x4ffc00000, data 0x27cea44/0x28c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:02.218240+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7256000/0x0/0x4ffc00000, data 0x27e6e38/0x28d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:03.218432+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:04.218554+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1457618 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:05.218675+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:06.218791+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7256000/0x0/0x4ffc00000, data 0x27e6e38/0x28d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:07.218935+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 2605056 heap: 129974272 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:08.219046+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 3473408 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:09.219227+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127549440 unmapped: 3473408 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466250 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:10.219357+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127754240 unmapped: 3268608 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:11.219484+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.967459679s of 13.195343018s, submitted: 25
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 3063808 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7208000/0x0/0x4ffc00000, data 0x2833179/0x2926000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:12.219655+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:13.219822+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:14.220017+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1463028 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:15.220164+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:16.220313+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f71c5000/0x0/0x4ffc00000, data 0x2877752/0x2969000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:17.220505+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 3055616 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:18.220667+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128008192 unmapped: 3014656 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:19.220839+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128253952 unmapped: 2768896 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467576 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:20.221008+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128270336 unmapped: 2752512 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:21.221220+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f7171000/0x0/0x4ffc00000, data 0x28ca067/0x29bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128286720 unmapped: 2736128 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:22.221411+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.849212646s of 10.941687584s, submitted: 26
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128327680 unmapped: 2695168 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:23.221570+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129499136 unmapped: 1523712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:24.221722+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129499136 unmapped: 1523712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473180 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:25.221904+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129499136 unmapped: 1523712 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:26.222066+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129564672 unmapped: 1458176 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f711e000/0x0/0x4ffc00000, data 0x291ced7/0x2a10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:27.222236+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129564672 unmapped: 1458176 heap: 131022848 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:28.222357+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129581056 unmapped: 2490368 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:29.222670+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129671168 unmapped: 2400256 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479142 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70f1000/0x0/0x4ffc00000, data 0x294a66d/0x2a3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:30.222842+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129671168 unmapped: 2400256 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:31.222963+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129843200 unmapped: 2228224 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:32.223081+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129843200 unmapped: 2228224 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.386429787s of 10.458808899s, submitted: 28
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:33.223215+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128794624 unmapped: 3276800 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:34.223339+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128827392 unmapped: 3244032 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e129/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476518 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:35.223476+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e1f3/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:36.223615+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e1f3/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:37.223776+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:38.223958+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:39.224110+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e1f3/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475670 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:40.224221+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:41.224367+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:42.224533+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.957421303s of 10.000261307s, submitted: 8
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:43.224632+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:44.224906+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e2bd/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1477438 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:45.225104+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:46.225245+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:47.225405+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:48.225578+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:49.225708+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70be000/0x0/0x4ffc00000, data 0x297e387/0x2a70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128835584 unmapped: 3235840 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1476748 data_alloc: 218103808 data_used: 536576
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:50.225885+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 3227648 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:51.226054+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70bf000/0x0/0x4ffc00000, data 0x297e3b6/0x2a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 3227648 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:52.226312+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128843776 unmapped: 3227648 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 heartbeat osd_stat(store_statfs(0x4f70bf000/0x0/0x4ffc00000, data 0x297e3b6/0x2a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.911422729s of 10.000315666s, submitted: 13
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:53.226447+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:54.226578+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479864 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:55.226741+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:56.226921+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:57.227105+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:58.227354+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 168 heartbeat osd_stat(store_statfs(0x4f70bb000/0x0/0x4ffc00000, data 0x297ff9c/0x2a72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:59.227509+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1479864 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:00.227685+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128139264 unmapped: 3932160 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _renew_subs
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:01.227805+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 3874816 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:02.227968+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 3874816 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:03.228153+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 3874816 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:04.228302+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128212992 unmapped: 3858432 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482838 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:05.228426+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:06.228599+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:07.228789+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.910813332s of 15.001037598s, submitted: 43
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:08.228939+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:09.229105+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:10.229236+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:11.229372+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:12.229486+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:13.229598+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:14.229698+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:15.231372+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:16.231494+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:17.231680+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:18.231810+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:19.231988+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:20.232114+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:21.232251+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:22.232423+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:23.232575+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:24.232742+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128221184 unmapped: 3850240 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:25.232880+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:26.233056+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:27.233317+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:28.233484+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:29.233631+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:30.233794+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:31.233958+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:32.234114+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:33.234242+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:34.234364+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:35.234486+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:36.234665+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:37.234829+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:38.234949+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:39.235099+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:40.235301+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128229376 unmapped: 3842048 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:41.235471+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:42.235605+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:43.235743+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:44.235887+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:45.236104+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:46.236393+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:47.236682+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:48.236844+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:49.236987+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:50.237158+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:51.237356+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:52.237498+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:53.237893+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:54.238018+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:55.238122+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:56.238353+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128237568 unmapped: 3833856 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:57.238515+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 3825664 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:58.238625+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 3825664 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:59.238754+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 3825664 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:00.238907+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128245760 unmapped: 3825664 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:01.239025+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128262144 unmapped: 3809280 heap: 132071424 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'config diff' '{prefix=config diff}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'config show' '{prefix=config show}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:02.239154+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128212992 unmapped: 4907008 heap: 133120000 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:03.239301+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 5062656 heap: 133120000 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:04.239418+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'log dump' '{prefix=log dump}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 139124736 unmapped: 5038080 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:05.239545+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'perf dump' '{prefix=perf dump}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'perf schema' '{prefix=perf schema}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
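The do_command entries in this burst (config diff/show, counter dump/schema, log dump, perf dump/histogram/schema) are the OSD answering admin-socket requests, most likely from a collector sweeping the daemon; each command logs a companion "result is N bytes" line once the reply is assembled. Something along these lines run on compute-0 would produce such entries; osd.1 is taken from this log, the rest is standard `ceph daemon` usage, and it needs the ceph CLI plus local access to the daemon's admin socket:

    import json
    import subprocess

    # Sketch: issue the same admin-socket commands the OSD is seen servicing
    # above. `ceph daemon <name> <command>` is the stock front end for the
    # admin socket and returns JSON on stdout.
    def admin_socket(daemon: str, *command: str) -> dict:
        out = subprocess.check_output(["ceph", "daemon", daemon, *command])
        return json.loads(out)

    perf = admin_socket("osd.1", "perf", "dump")
    print(sorted(perf.keys())[:5])  # first few perf-counter sections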
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 15745024 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:06.239673+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 15745024 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b8000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:07.239840+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 15745024 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:08.239970+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 15745024 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:09.240133+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 15745024 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483014 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:10.240255+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 128417792 unmapped: 15745024 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:11.240414+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.502941132s of 63.509536743s, submitted: 1
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 ms_handle_reset con 0x5590981f0800 session 0x55909a424000
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129007616 unmapped: 15155200 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:12.240532+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129007616 unmapped: 15155200 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:13.240636+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Got map version 19
Nov 29 05:54:15 compute-0 ceph-osd[90181]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
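The mgrc lines record receipt of mgr map version 19 and the active mgr's address vector in the v2/v1 messenger form. A throwaway parse of that format, purely illustrative:

    import re

    # Sketch: pull the v2/v1 endpoints out of an "Active mgr is now [...]"
    # line; the addrvec format is copied from the entry above.
    line = ("mgrc handle_mgr_map Active mgr is now "
            "[v2:192.168.122.100:6800/1460327761,"
            "v1:192.168.122.100:6801/1460327761]")
    for proto, host, port, nonce in re.findall(
            r"(v[12]):([\d.]+):(\d+)/(\d+)", line):
        print(f"{proto} -> {host}:{port} (nonce {nonce})")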
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 15147008 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:14.240748+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 15147008 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:15.240884+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 15147008 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:16.241012+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 15147008 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:17.241157+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 15147008 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:18.241292+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129015808 unmapped: 15147008 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:19.241419+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:20.241535+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:21.241647+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:22.241755+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:23.241910+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:24.242036+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:25.242210+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:26.242379+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:27.242528+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:28.242634+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:29.242747+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:30.242912+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:31.243039+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:32.243172+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:33.243369+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:34.243506+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:35.243686+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:36.243851+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:37.244013+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:38.244193+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:39.244328+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:40.244484+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:41.244609+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:42.244724+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:43.244873+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:44.244986+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:45.245134+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:46.245393+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:47.245677+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:48.245926+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:49.246067+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:50.246252+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:51.246500+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:52.246724+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:53.246882+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:54.247036+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:55.247245+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:56.247452+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:57.247671+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:58.247838+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:59.248001+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:00.248131+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:01.249421+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:02.249559+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:03.249705+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:04.249909+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129024000 unmapped: 15138816 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:05.250233+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:06.250410+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:07.250671+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:08.250817+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:09.250994+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:10.251163+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:11.251360+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:12.251530+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:13.251712+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:14.251873+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:15.252026+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:16.252151+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129032192 unmapped: 15130624 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:17.252310+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:18.252426+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:19.252577+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:20.252699+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:21.252891+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:22.253110+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:23.253326+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:24.253447+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:25.253601+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:26.253720+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:27.253948+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:28.254123+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:29.254310+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:30.254449+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:31.254581+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:32.254715+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129040384 unmapped: 15122432 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:33.254906+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:34.255034+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:35.255196+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:36.255323+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:37.255501+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:38.255619+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:39.255797+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:40.255938+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:41.256372+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:42.256488+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:43.256619+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:44.256736+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:45.256843+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:46.256957+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:47.257182+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:48.257372+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:49.257528+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129048576 unmapped: 15114240 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:50.257666+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:51.257815+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:52.257942+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:53.258195+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:54.258320+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:55.258496+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:56.258635+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:57.258811+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:58.258964+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:59.259228+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:00.259572+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:01.259760+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:02.259879+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:03.260051+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:04.260182+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 15106048 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:05.260302+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129064960 unmapped: 15097856 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:06.262222+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129064960 unmapped: 15097856 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:07.262439+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129064960 unmapped: 15097856 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:08.262698+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129064960 unmapped: 15097856 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:09.262855+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129064960 unmapped: 15097856 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:10.263014+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129064960 unmapped: 15097856 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:11.263155+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129064960 unmapped: 15097856 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:12.263322+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129081344 unmapped: 15081472 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:13.263467+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129081344 unmapped: 15081472 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:14.263608+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129081344 unmapped: 15081472 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:15.263765+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129081344 unmapped: 15081472 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:16.263897+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129081344 unmapped: 15081472 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:17.264078+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129081344 unmapped: 15081472 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:18.264249+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129081344 unmapped: 15081472 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:19.264414+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129081344 unmapped: 15081472 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:20.264535+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129081344 unmapped: 15081472 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:21.264652+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:22.264772+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:23.264905+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:24.265078+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:25.265254+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:26.265451+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:27.265628+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:28.265800+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:29.265924+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:30.266040+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:31.266203+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:32.266426+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:33.266630+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:34.266769+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:35.267173+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:36.267789+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 15073280 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:37.268179+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:38.268816+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:39.269070+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:40.269945+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:41.270290+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:42.270455+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:43.270619+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:44.270778+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:45.270906+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:46.271077+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:47.271289+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:48.271449+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:49.271659+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:50.271801+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:51.271947+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:52.272081+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129097728 unmapped: 15065088 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:53.272321+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 15056896 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:54.272584+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 15056896 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:55.272765+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 15056896 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:56.272916+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 15056896 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:57.273134+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 15056896 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:58.273334+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 15056896 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:59.273500+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 15056896 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:00.273700+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 15056896 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:01.273859+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 15048704 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:02.274006+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 15048704 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:03.274140+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 15048704 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:04.274297+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 15048704 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:05.274446+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 15048704 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:06.274619+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 15048704 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:07.274797+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 15048704 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:08.274922+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129114112 unmapped: 15048704 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:09.275096+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129122304 unmapped: 15040512 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:10.275314+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129122304 unmapped: 15040512 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:11.275439+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129122304 unmapped: 15040512 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:12.275578+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 15032320 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:13.275694+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 15032320 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:14.275801+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 15032320 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:15.275963+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 15032320 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:16.276094+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 15032320 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:17.276244+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 15032320 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:18.276465+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 15032320 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:19.276681+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 15032320 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:20.276881+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 15032320 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:21.277008+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129130496 unmapped: 15032320 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:22.277190+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:23.277332+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:24.277591+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:25.277747+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:26.277913+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:27.278121+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:28.278247+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:29.278449+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:30.278645+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:31.278829+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:32.279001+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:33.279132+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:34.279324+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:35.279495+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:36.279662+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:37.279805+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:38.279940+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:39.280087+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:40.280235+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129138688 unmapped: 15024128 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:41.280371+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:42.280490+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:43.280638+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:44.280788+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:45.280953+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:46.281118+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:47.281292+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:48.281381+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:49.281501+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:50.281668+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:51.281790+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:52.281957+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129146880 unmapped: 15015936 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:53.282151+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:54.282316+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:55.282461+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:56.282575+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:57.282713+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:58.282854+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:59.282969+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:00.283101+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:01.283993+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:02.284108+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:03.284339+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:04.284472+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:05.284594+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:06.284760+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:07.284996+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:08.285158+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129155072 unmapped: 15007744 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:09.285332+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:10.285511+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:11.285686+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:12.285849+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 14K writes, 52K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 4177 syncs, 3.37 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3169 writes, 9859 keys, 3169 commit groups, 1.0 writes per commit group, ingest: 13.28 MB, 0.02 MB/s
                                           Interval WAL: 3169 writes, 1178 syncs, 2.69 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:13.285983+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:14.286126+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:15.286361+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:16.286558+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:17.286739+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:18.286861+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:19.286994+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:20.287187+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2237580056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:21.287377+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:22.287557+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:23.287769+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:24.287917+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129163264 unmapped: 14999552 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:25.288039+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:26.288230+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:27.288434+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:28.288590+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:29.288781+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:30.288894+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:31.289067+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:32.289200+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:33.289307+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:34.289466+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:35.289583+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:36.289748+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:37.289925+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:38.290096+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129171456 unmapped: 14991360 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:39.290322+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14983168 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:40.290442+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:41.290573+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:42.290713+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:43.290853+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:44.290993+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:45.291169+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:46.291319+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:47.291473+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:48.291600+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:49.291738+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:50.291918+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:51.292034+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:52.292177+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:53.292294+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:54.292412+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129187840 unmapped: 14974976 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:55.292555+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 14966784 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:56.292679+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 14966784 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:57.292827+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 14966784 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:58.293002+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 14966784 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:59.293138+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 14966784 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:00.293351+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 14966784 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:01.293993+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 14958592 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:02.294133+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 14958592 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:03.294315+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 14958592 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:04.294438+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 14958592 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:05.294595+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 14958592 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1482134 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:06.294711+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 14958592 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:07.294878+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 14958592 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:08.295000+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 14958592 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:09.295138+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129204224 unmapped: 14958592 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:10.295322+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.270385742s of 299.300354004s, submitted: 201
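
The one-off _kv_sync_thread line above is a utilization report for BlueStore's key-value commit thread: over a ~299.3 s window it was idle for all but ~30 ms and flushed 201 transactions. The arithmetic, spelled out:

    idle, window, submitted = 299.270385742, 299.300354004, 201
    busy = window - idle
    print(f"busy {busy * 1000:.1f} ms of {window:.1f} s "
          f"({busy / window:.4%}), {submitted / window:.2f} commits/s")
    # busy 30.0 ms of 299.3 s (0.0100%), 0.67 commits/s
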
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 14950400 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:11.295499+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 14950400 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:12.295628+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129294336 unmapped: 14868480 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:13.295765+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129294336 unmapped: 14868480 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:14.295915+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129294336 unmapped: 14868480 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:15.296037+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129294336 unmapped: 14868480 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:16.296162+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129294336 unmapped: 14868480 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:17.296312+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129302528 unmapped: 14860288 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:18.296434+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129302528 unmapped: 14860288 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:19.296642+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129302528 unmapped: 14860288 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:20.296824+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129302528 unmapped: 14860288 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:21.296985+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129302528 unmapped: 14860288 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:22.297133+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129310720 unmapped: 14852096 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:23.297248+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129310720 unmapped: 14852096 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:24.297415+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129310720 unmapped: 14852096 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:25.297551+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129310720 unmapped: 14852096 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:26.297665+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129310720 unmapped: 14852096 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:27.297821+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129310720 unmapped: 14852096 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:28.297945+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129310720 unmapped: 14852096 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:29.298079+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129310720 unmapped: 14852096 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:30.298220+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129318912 unmapped: 14843904 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:31.298351+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129318912 unmapped: 14843904 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:32.298501+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129318912 unmapped: 14843904 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:33.298577+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129318912 unmapped: 14843904 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:34.298701+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129318912 unmapped: 14843904 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:35.298781+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:36.298896+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:37.299041+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:38.299171+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:39.299260+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:40.299389+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:41.299504+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:42.299650+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:43.299764+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:44.299959+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:45.300156+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:46.300325+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:47.300488+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:48.300605+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:49.300715+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:50.300833+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:51.300960+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129327104 unmapped: 14835712 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:52.301090+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:53.301226+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:54.301347+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:55.301479+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:56.301643+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:57.301835+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:58.301966+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:59.302145+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:00.302336+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:01.302455+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:02.302596+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:03.302738+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:04.302919+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 14827520 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:05.303142+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 14819328 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:06.303319+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 14819328 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:07.303488+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 14819328 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:08.303615+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 14819328 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:09.303742+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 14819328 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:10.303906+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129343488 unmapped: 14819328 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:11.304060+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 14811136 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:12.304187+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 14811136 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:13.304324+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 14811136 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:14.304445+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:15.304578+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:16.304702+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:17.304946+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:18.305304+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:19.305579+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:20.305717+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:21.306109+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:22.306366+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:23.306832+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:24.307437+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:25.307926+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:26.308309+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:27.308688+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:28.309033+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:29.309342+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:30.309473+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:31.309642+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:32.309840+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:33.310080+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:34.310222+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:35.310408+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:36.310600+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129359872 unmapped: 14802944 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:37.310756+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:38.310913+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:39.311087+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:40.311287+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:41.311471+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:42.311626+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:43.311766+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:44.311916+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:45.312037+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:46.312167+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:47.312327+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:48.312489+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:49.312670+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:50.313369+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:51.313826+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:52.313984+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 14794752 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:53.314240+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:54.314501+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:55.314952+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:56.315145+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:57.315295+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:58.315694+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:59.315833+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:00.316411+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:01.316561+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:02.317025+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:03.317191+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:04.317447+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:05.317582+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:06.317871+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:07.318033+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:08.318376+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:09.318507+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:10.318744+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:11.318893+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129376256 unmapped: 14786560 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:12.319019+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:13.319144+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:14.319388+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:15.319508+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:16.319659+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:17.319789+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:18.319918+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:19.320054+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:20.320231+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:21.320362+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:22.320508+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:23.320687+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:24.321035+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:25.321175+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:26.321452+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:27.321626+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:28.321864+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129384448 unmapped: 14778368 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:29.322013+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:30.322216+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:31.322358+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:32.322563+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:33.322710+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:34.322924+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:35.323060+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:36.323212+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:37.323482+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:38.323657+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:39.323797+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:40.324001+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:41.324144+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:42.324352+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:43.324516+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:44.324712+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:45.324868+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:46.325073+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129392640 unmapped: 14770176 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:47.325290+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:48.325418+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:49.325545+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:50.325685+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:51.325854+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:52.326028+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:53.326172+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:54.326371+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:55.326525+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:56.326681+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:57.326825+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:58.326963+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 14761984 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:59.327117+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:00.327284+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:01.327412+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:02.327547+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:03.327683+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:04.327819+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:05.327936+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:06.328050+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:07.328210+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:08.328321+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:09.328519+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:10.328793+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129409024 unmapped: 14753792 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:11.328990+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129417216 unmapped: 14745600 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:12.329219+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129417216 unmapped: 14745600 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:13.329405+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129417216 unmapped: 14745600 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:14.329714+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129417216 unmapped: 14745600 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:15.329927+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:16.330082+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:17.330305+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:18.330453+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:19.330582+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:20.330719+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:21.330884+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:22.331055+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:23.331237+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:24.426729+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:25.426855+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:26.427391+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:27.427613+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:28.427825+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:29.428049+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:30.428147+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:31.428248+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:32.428397+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129425408 unmapped: 14737408 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:33.428512+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129433600 unmapped: 14729216 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:34.428627+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129433600 unmapped: 14729216 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:35.428748+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129433600 unmapped: 14729216 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:36.428863+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129433600 unmapped: 14729216 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:37.429025+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129433600 unmapped: 14729216 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:38.429121+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129433600 unmapped: 14729216 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:39.429233+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129433600 unmapped: 14729216 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:40.429329+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129433600 unmapped: 14729216 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: osd.1 169 heartbeat osd_stat(store_statfs(0x4f70b9000/0x0/0x4ffc00000, data 0x29819ff/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [0,2] op hist [])
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:41.429445+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129433600 unmapped: 14729216 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:15 compute-0 ceph-osd[90181]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:15 compute-0 ceph-osd[90181]: bluestore.MempoolThread(0x559095a2bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1481958 data_alloc: 218103808 data_used: 544768
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:42.429583+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'config diff' '{prefix=config diff}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129433600 unmapped: 14729216 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'config show' '{prefix=config show}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
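
The do_command entries record requests arriving over the OSD's admin socket ('config diff', 'config show', 'counter dump', 'counter schema'). The same requests can be reproduced by hand with the ceph CLI; a sketch using subprocess, assuming the CLI and the daemon's admin socket are reachable from the shell it runs in:

    import json
    import subprocess

    def osd_daemon_command(osd_id: int, *cmd: str) -> dict:
        """Run `ceph daemon osd.N <cmd...>` and decode the JSON reply."""
        out = subprocess.check_output(["ceph", "daemon", f"osd.{osd_id}", *cmd])
        return json.loads(out)

    # Mirrors the 'config diff' request logged above.
    diff = osd_daemon_command(1, "config", "diff")
    print(list(diff)[:5])
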
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:43.429707+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129646592 unmapped: 14516224 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: tick
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_tickets
Nov 29 05:54:15 compute-0 ceph-osd[90181]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:44.429821+0000)
Nov 29 05:54:15 compute-0 ceph-osd[90181]: prioritycache tune_memory target: 4294967296 mapped: 129277952 unmapped: 14884864 heap: 144162816 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:15 compute-0 ceph-osd[90181]: do_command 'log dump' '{prefix=log dump}'
Nov 29 05:54:15 compute-0 rsyslogd[1003]: imjournal from <np0005539482:ceph-osd>: begin to drop messages due to rate-limiting
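
At this point rsyslog's imjournal module starts discarding messages from the chatty ceph-osd stream, so the file log has a gap here that the binary journal may not (imjournal's ratelimit.interval and ratelimit.burst parameters set the threshold). A small scanner that flags such gaps, using the marker text verbatim from the line above:

    import sys

    MARK = "begin to drop messages due to rate-limiting"

    # Pipe a syslog file through this to locate imjournal drop points.
    for lineno, line in enumerate(sys.stdin, 1):
        if MARK in line:
            print(f"line {lineno}: rate limiting began, expect a gap after here")
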
Nov 29 05:54:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2147154485' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 05:54:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/4066747595' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 05:54:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3461734485' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 05:54:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1160944512' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 05:54:15 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2237580056' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 05:54:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 05:54:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148087550' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 05:54:15 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 05:54:15 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2025950275' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 05:54:16 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14946 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:16 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 05:54:16 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826411137' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 05:54:16 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:16 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14949 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:16 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14951 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:16 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/148087550' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 05:54:16 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2025950275' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 05:54:16 compute-0 ceph-mon[75176]: from='client.14946 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:16 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/826411137' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 05:54:16 compute-0 ceph-mon[75176]: pgmap v1527: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:16 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14953 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14955 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14957 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:17 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14961 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:17 compute-0 ceph-mon[75176]: from='client.14949 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:17 compute-0 ceph-mon[75176]: from='client.14951 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:17 compute-0 ceph-mon[75176]: from='client.14953 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:17 compute-0 ceph-mon[75176]: from='client.14955 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:17 compute-0 ceph-mon[75176]: from='client.14957 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:54:17 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 05:54:17 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97421607' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 05:54:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14965 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:18 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14969 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 05:54:18 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1110767060' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 05:54:18 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:18 compute-0 ceph-mon[75176]: from='client.14961 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:18 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/97421607' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 05:54:18 compute-0 ceph-mon[75176]: from='client.14965 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:18 compute-0 ceph-mon[75176]: from='client.14969 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 05:54:18 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1110767060' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 05:54:18 compute-0 ceph-mon[75176]: pgmap v1528: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:18 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 05:54:18 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/354878995' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
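
The burst of mon and mgr audit lines above is a single admin client (192.168.122.100, entity client.admin) sweeping through status commands; each command shows up as handle_command/dispatch on the mon, and orchestrator/telemetry commands additionally pass through the mgr. A tally sketch for seeing which command prefixes dominate such a sweep (the regex and the "messages" path are mine, for illustration):

    import re
    from collections import Counter

    audit_re = re.compile(r'cmd=\[\{"prefix": "([^"]+)"')

    counts = Counter()
    with open("messages") as log:          # hypothetical path to this log file
        for entry in log:
            m = audit_re.search(entry)
            if m:
                counts[m.group(1)] += 1

    for prefix, n in counts.most_common(10):
        print(f"{n:5d}  {prefix}")
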
Nov 29 05:54:19 compute-0 podman[293632]: 2025-11-29 05:54:19.01107325 +0000 UTC m=+0.057432592 container health_status 48e3dd7c8367a40700e212f0a800ad187ec4e53393f1d27fac5895583572046c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd)
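
The podman health_status event embeds the container definition as config_data=..., a Python-style literal (single quotes, bare True) rather than JSON, so json.loads rejects it while ast.literal_eval parses it cleanly. A sketch on a shortened excerpt of the field above:

    import ast

    # Shortened excerpt of the config_data=... field from the event above.
    config_data = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
                   "'healthcheck': {'test': '/openstack/healthcheck'}, "
                   "'net': 'host', 'privileged': True, 'restart': 'always'}")

    cfg = ast.literal_eval(config_data)    # not JSON, so literal_eval
    print(cfg["healthcheck"]["test"], cfg["privileged"])
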
Nov 29 05:54:19 compute-0 sshd[190545]: drop connection #1 from [45.78.219.216]:41910 on [38.102.83.17]:22 penalty: exceeded LoginGraceTime
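
The sshd line is unrelated to Ceph: it appears to be OpenSSH's per-source penalty mechanism (PerSourcePenalties, in newer OpenSSH releases) dropping a client that had already overstayed LoginGraceTime without authenticating. Extracting the offending source from such lines (regex is mine):

    import re

    entry = ("sshd[190545]: drop connection #1 from [45.78.219.216]:41910 "
             "on [38.102.83.17]:22 penalty: exceeded LoginGraceTime")

    m = re.search(r"drop connection #(\d+) from \[([^\]]+)\]:(\d+).*penalty: (.+)",
                  entry)
    count, src, port, why = m.groups()
    print(f"dropped #{count}: {src}:{port} ({why})")
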
Nov 29 05:54:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 05:54:19 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247564348' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 05:54:19 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 05:54:19 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:34.084922+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 925696 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:35.085087+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:36.085299+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 917504 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:37.085418+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:38.085585+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:39.085770+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 909312 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:40.085970+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:41.086147+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 901120 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:42.086354+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:43.086497+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:44.086669+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 892928 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:45.086796+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:46.086962+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71409664 unmapped: 884736 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:47.087146+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:48.087289+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 876544 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:49.087435+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:50.087645+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 868352 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:51.087871+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:52.088129+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 860160 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:53.088522+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 851968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:54.088679+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 851968 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:55.088792+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 843776 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:56.088903+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:57.089039+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 835584 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:58.089202+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:20:59.089385+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:00.089569+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 827392 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:01.089737+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 819200 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:02.089970+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 819200 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:03.090144+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 819200 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:04.090346+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 811008 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:05.090483+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 802816 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:06.090619+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 802816 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:07.090746+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 802816 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:08.090905+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 794624 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:09.091092+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 794624 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:10.091249+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 794624 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:11.091418+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:12.091558+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:13.091746+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:14.091889+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:15.092060+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:16.092204+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 786432 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:17.092342+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:18.092473+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:19.092646+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:20.092772+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:21.092900+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:22.093070+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:23.093201+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:24.093326+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:25.093444+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:26.093586+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:27.093769+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:28.093922+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 778240 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:29.094161+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:30.094313+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:31.094449+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:32.094627+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:33.094764+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:34.094874+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 770048 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:35.094985+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 753664 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:36.095337+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 753664 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:37.095457+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 753664 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:38.095606+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 753664 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:39.095741+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 753664 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:40.095896+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:41.096098+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:42.096333+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:43.096458+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:44.096584+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:45.096693+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:46.096811+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:47.096964+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:48.097096+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:49.097229+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:50.097315+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 745472 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:51.097428+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 737280 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:52.097566+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 737280 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:53.097758+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 737280 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:54.097927+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 737280 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:55.098100+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:56.098355+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:57.098493+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:58.098689+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:21:59.098797+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:00.098939+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:01.099083+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:02.099416+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:03.099544+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:04.099716+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:05.099825+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:06.100030+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:07.100144+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:08.100279+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:09.100427+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:10.100540+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:11.100683+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:12.100900+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:13.101042+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:14.101233+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:15.102783+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:16.102907+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:17.103292+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:18.103557+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:19.103692+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:20.103870+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:21.103993+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:22.104138+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:23.104362+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:24.104691+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 696320 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:25.104851+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:26.105038+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:27.105186+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:28.106179+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:29.106919+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:30.107482+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:31.107833+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:32.108014+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:33.108192+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:34.108509+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:35.108623+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:36.108835+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:47.110661+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 655360 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:50.111236+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 638976 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:22:56.112171+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 720896 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:06.115071+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 712704 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:11.116002+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 688128 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:12.116229+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 671744 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:23:16.116823+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 663552 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:03.126016+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 647168 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:13.128485+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 647168 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:14.128598+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 647168 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc ms_handle_reset ms_handle_reset con 0x55c4e689dc00
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: get_auth_request con 0x55c4e7e6b400 auth_method 0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_configure stats_period=5
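The five mgrc/monclient lines above are one complete manager-session bounce: the messenger reports the reset, the client terminates the stale session, dials the mgr's v2/v1 address pair, answers the auth exchange, and receives the mgr's configure message telling it to report stats every 5 seconds. A compressed sketch of that sequence follows; the class and method bodies are illustrative, not Ceph's MgrClient.

    #include <iostream>
    #include <string>

    // Illustrative mgr-client bounce: on a reset, drop the session, dial
    // the advertised v2/v1 address pair, re-authenticate, then apply the
    // configuration the manager pushes back (e.g. stats_period).
    struct MgrClient {
        std::string addrs =
            "[v2:192.168.122.100:6800,v1:192.168.122.100:6801]";
        int stats_period = 0;

        void ms_handle_reset() {
            std::cout << "reconnect Terminating session\n";
            std::cout << "reconnect Starting new session with " << addrs << "\n";
            std::cout << "get_auth_request auth_method 0\n";  // auth assumed ok
            handle_mgr_configure(5);  // manager pushes its settings
        }
        void handle_mgr_configure(int period) {
            stats_period = period;
            std::cout << "handle_mgr_configure stats_period=" << period << "\n";
        }
    };

    int main() { MgrClient{}.ms_handle_reset(); }

Driving the reconnect from the reset callback keeps the stats stream self-healing: the next configure message restores the reporting period without operator action.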
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:15.128712+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:16.128858+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:17.128989+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:18.129112+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:19.129227+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 360448 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 ms_handle_reset con 0x55c4e74b6400 session 0x55c4e6831680
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e72bec00
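Here the monitor-side connection is reset and an inbound peer immediately re-authenticates: "handle_auth_request added challenge" is the acceptor attaching a random nonce that the initiator must fold into its proof, so a captured handshake cannot simply be replayed. A toy illustration of that challenge step follows; real cephx mixes challenges from both sides and encrypts the proof under a shared secret, which this stand-in only gestures at.

    #include <cstdint>
    #include <iostream>
    #include <random>

    // Toy challenge step: the acceptor attaches a random nonce that the
    // initiator must fold into its proof, defeating handshake replay.
    // Simplified stand-in; not the real cephx construction.
    int main() {
        std::mt19937_64 rng(std::random_device{}());
        uint64_t server_challenge = rng();  // the "added challenge" in the log
        uint64_t client_challenge = rng();
        uint64_t shared_key = 0x5eed;       // hypothetical shared secret
        uint64_t proof = (server_challenge ^ client_challenge) + shared_key;
        bool ok = proof == ((server_challenge ^ client_challenge) + shared_key);
        std::cout << std::hex << "challenge " << server_challenge
                  << " proof " << proof << std::dec
                  << " accepted: " << ok << "\n";
    }

Because the nonce is fresh per connection, an eavesdropper who records one exchange gains nothing useful for the next.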
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:20.129359+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:21.129573+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:22.129788+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:23.129981+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:24.130209+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:25.130397+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:26.130532+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:27.130697+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:28.130858+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:29.131000+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:30.131216+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:31.131446+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:32.131722+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:33.131963+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:34.132148+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:35.132355+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:36.132540+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:37.132740+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:38.132968+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:39.133143+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:40.133370+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:41.133672+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:42.133825+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:43.133999+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:44.134182+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:45.134297+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:46.134391+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:47.134494+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:48.134689+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:49.134919+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:50.135094+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:51.135249+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:52.135477+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:53.135600+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:54.135776+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:55.135898+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:56.136176+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:57.136337+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:58.136536+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:24:59.136707+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 344064 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:00.136866+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:01.137017+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:02.137175+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:03.137334+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:04.137515+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:05.137664+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:06.137899+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:07.138066+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:08.138202+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:09.138322+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:10.138510+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:11.138649+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:12.138808+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:13.138993+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:14.139215+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:15.139310+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:16.139434+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:17.139558+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:18.139672+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:19.139813+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:20.139964+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:21.140117+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:22.140307+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:23.140475+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:24.140626+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:25.140756+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:26.140960+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:27.141134+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:28.141281+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:29.141459+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:30.141602+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:31.141733+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:32.142344+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:33.142479+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:34.142668+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:35.142818+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:36.142964+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:37.143130+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:38.143275+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:39.143452+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:40.143633+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:41.143787+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:42.144031+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:43.144211+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:44.144349+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:45.144481+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:46.144656+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:47.144812+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:48.146048+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:49.147339+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:50.147466+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:51.147598+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:52.147793+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:53.147927+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:54.148095+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:55.148244+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:56.148442+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:57.148595+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:58.148677+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:25:59.148837+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 335872 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:00.149009+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:01.149178+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:02.149353+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:03.149461+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:04.149618+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:05.149770+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:06.150182+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:07.150341+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:08.150480+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:09.150592+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:10.150705+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:11.150816+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:12.150961+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:13.151144+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:14.151291+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:15.151435+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:16.151547+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:17.151738+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:18.152071+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:19.152212+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 319488 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:20.152365+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:21.152498+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:22.152640+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:23.152752+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:24.152891+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:25.153019+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:26.153140+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:27.522051+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:28.522378+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:29.522548+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:30.522699+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:31.522848+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:32.523020+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:33.523152+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:34.523321+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:35.523445+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:36.523587+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:37.523711+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:38.523854+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:39.524004+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 303104 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:40.524142+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:41.524304+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:42.524533+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:43.524701+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:44.524890+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:45.525032+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 286720 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:46.525189+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:47.525336+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:48.525506+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:49.525647+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:50.525826+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:51.526015+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:52.526231+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:53.526346+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:54.526535+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:55.526663+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:56.526803+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:57.526939+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:58.527104+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:26:59.527255+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 278528 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:00.527412+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:01.527553+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:02.527912+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:03.528079+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:04.528259+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:05.528467+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:06.528689+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:07.528841+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:08.529033+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:09.529193+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 262144 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:10.529344+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 253952 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:11.529508+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:12.529695+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:13.529838+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:14.529964+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:15.530107+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:16.530282+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:17.530495+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:18.530752+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:19.531002+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 237568 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:20.531905+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:21.532665+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:22.532891+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:23.533051+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:24.533235+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:25.533417+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:26.533541+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:27.533663+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:28.533814+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:29.533956+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:30.534077+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 221184 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:31.534207+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:32.534367+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:33.534474+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:34.534600+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:35.534747+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:36.534962+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:37.535099+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:38.535328+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:39.535472+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:40.535686+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:41.535822+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:42.535994+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:43.536164+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:44.536351+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:45.536488+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:46.536704+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:47.536829+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:48.537006+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:49.537192+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 212992 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:50.537364+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:51.537545+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:52.538343+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:53.538490+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:54.538743+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:55.538885+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:56.539020+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:57.539215+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:58.539344+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:27:59.539543+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 196608 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:00.539748+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:01.539924+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:02.540204+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:03.540362+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:04.540577+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:05.540698+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:06.540884+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:07.541014+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:08.541158+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:09.541349+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 188416 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:10.541526+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 172032 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:11.541700+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:12.541927+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:13.542105+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:14.542300+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:15.542475+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:16.542673+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:17.542811+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:18.542979+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:19.543176+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:20.543352+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:21.543472+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:22.543646+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:23.543790+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:24.880044+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:25.881249+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 172032 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:26.882032+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 172032 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:27.882308+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 172032 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:28.882557+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 172032 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:29.882780+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:30.882930+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:31.883099+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:32.883332+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:33.883492+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:34.883695+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:35.883890+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:36.884168+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:37.884395+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:38.884584+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:39.884792+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:40.884981+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:41.885159+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:42.885382+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:43.885532+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 155648 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:44.885665+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:45.885799+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:46.886005+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:47.886191+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:48.886349+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:49.886500+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:50.886674+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:51.886844+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:52.887090+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:53.887336+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:54.887540+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:55.888084+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:56.888236+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:57.888433+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:58.888574+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:28:59.888696+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:00.888833+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:01.888975+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:02.889134+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:03.889361+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:04.889560+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:05.889698+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:06.889858+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 131072 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:07.890005+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5780 writes, 24K keys, 5780 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5780 writes, 976 syncs, 5.92 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a57090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55c4e5a571f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 98304 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:08.890163+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 98304 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:09.890314+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 81920 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:10.890446+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 81920 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:11.890608+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 65536 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:12.890786+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 65536 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:13.890950+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 65536 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:14.891113+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:15.891315+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:16.891472+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:17.891668+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:18.891839+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:19.891966+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:20.892092+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:21.892222+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:22.892397+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:23.892523+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:24.892689+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:25.892844+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:26.892976+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:27.893116+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:28.893221+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:29.893338+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:30.893576+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:31.893702+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:32.893886+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:33.894015+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:34.894412+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:35.894626+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:36.894799+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:37.895204+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:38.895410+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:39.895582+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:40.895754+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:41.895931+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:42.896115+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:43.896346+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:44.896533+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:45.896721+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:46.896914+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:47.897058+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:48.897233+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:49.897399+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:50.897570+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:51.897778+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:52.898012+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:53.898175+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:54.898358+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:55.898531+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:56.898723+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:57.898869+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:58.899078+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:29:59.899375+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:00.899562+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:01.899757+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:02.900007+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:03.900176+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:04.900358+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:05.900514+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:06.900674+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:07.900876+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:08.901050+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 163840 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:09.901215+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 147456 heap: 72294400 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.866027832s of 600.168090820s, submitted: 106
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:10.901421+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 139264 heap: 73342976 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:11.901541+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:12.901758+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:13.901939+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:14.902092+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:15.902337+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:16.902526+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:17.902697+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:18.902842+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:19.902987+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:20.903140+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:21.903339+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:22.903573+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:23.903806+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:24.903978+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:25.904127+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:26.904315+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:27.904495+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:28.904746+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:29.904967+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:30.905156+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:31.905319+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:32.905487+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:33.905665+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:34.905857+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:35.905995+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:36.906156+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:37.906355+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:38.906518+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:39.906665+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:40.906830+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:41.906961+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:42.907137+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:43.907341+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:44.907475+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:45.907606+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:46.907790+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:47.907964+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:48.908146+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:49.908357+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:50.908521+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:51.908732+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:52.908910+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:53.909089+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:54.909255+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:55.909445+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:56.909575+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:57.909728+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:58.909856+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:30:59.910016+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:00.910175+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:01.910308+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:02.910460+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:03.910580+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:04.910749+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:05.910882+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:06.911012+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:07.911210+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:08.911340+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:09.911470+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:10.911618+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:11.911759+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:12.911973+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:13.912176+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:14.912389+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:15.912538+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:16.912734+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:17.912970+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:18.913104+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:19.913356+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1130496 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:20.913504+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:21.913674+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:22.913866+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:23.914032+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:24.914197+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:25.914361+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:26.914515+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:27.914692+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:28.914967+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:29.915159+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:30.915377+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:31.915594+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:32.915805+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:33.915953+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:34.916160+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:35.916418+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:36.916595+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:37.916790+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:38.916977+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:39.917177+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:40.917335+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:41.917534+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:42.917685+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:43.917911+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:44.918124+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:45.918329+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:46.918468+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:47.918620+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:48.918828+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:49.918951+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:50.919071+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:51.919232+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:52.919505+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:53.919695+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:54.919830+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:55.919987+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:56.920134+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:57.920384+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:58.920579+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:31:59.920739+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:00.920950+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:01.921314+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:02.921674+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:03.921918+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1122304 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:04.922122+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:05.922398+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:06.922690+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:07.922999+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:08.923230+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:09.923468+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1105920 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:10.923640+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:11.923828+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:12.924103+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:13.924332+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:14.924552+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:15.924823+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:16.925326+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:17.925495+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:18.925684+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:19.926030+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:20.926251+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:21.926427+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:22.926674+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:23.926899+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1097728 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:24.927245+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:25.927396+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:26.927913+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:27.928144+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:28.928412+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:29.928639+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:30.928801+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:31.928968+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:32.929230+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:33.929456+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:34.929683+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:35.929913+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:36.930039+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:37.930346+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:38.930573+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:39.930759+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:40.930939+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:41.931148+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:42.931390+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:43.931593+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1081344 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:44.931791+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:45.931963+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:46.932148+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:47.932311+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:48.932491+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:49.932624+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:50.932754+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:51.932919+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:52.933118+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:53.933329+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:54.933473+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:55.933659+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:56.933891+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:57.934076+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:58.934205+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:32:59.934326+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:00.934485+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:01.934646+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:02.934836+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:03.934970+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:04.935156+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:05.935330+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:06.935515+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:07.935686+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:08.935891+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:09.936076+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:10.936313+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:11.936474+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:12.936693+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:13.936915+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 1064960 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:14.937303+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:15.937480+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:16.937672+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:17.937822+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:18.937983+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:19.938181+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:20.938388+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:21.938623+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:22.938854+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:23.938989+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:24.939142+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:25.939342+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:26.939477+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:27.939624+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871678 data_alloc: 218103808 data_used: 192512
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:28.939793+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:29.939991+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:30.940133+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1048576 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fcab2000/0x0/0x4ffc00000, data 0xb7866/0x16c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 120 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 201.120803833s of 201.376556396s, submitted: 106
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:31.940344+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e74b6400
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1007616 heap: 74391552 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fc2ac000/0x0/0x4ffc00000, data 0x8bafb4/0x972000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:32.940491+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 123 ms_handle_reset con 0x55c4e74b6400 session 0x55c4e957cb40
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938465 data_alloc: 218103808 data_used: 208896
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 17620992 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:33.940631+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e72be400
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 17588224 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fc2a7000/0x0/0x4ffc00000, data 0x8bcb70/0x976000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:34.940788+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 124 ms_handle_reset con 0x55c4e72be400 session 0x55c4e9cd3e00
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:35.940890+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:36.941079+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:37.941315+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975200 data_alloc: 218103808 data_used: 212992
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fbe33000/0x0/0x4ffc00000, data 0xd2e709/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:38.941490+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:39.941641+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:40.941770+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 17424384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fbe33000/0x0/0x4ffc00000, data 0xd2e709/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 124 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:41.941975+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe33000/0x0/0x4ffc00000, data 0xd2e709/0xde9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:42.942159+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977166 data_alloc: 218103808 data_used: 212992
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:43.942335+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:44.942479+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:45.942620+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:46.942782+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:47.942917+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977166 data_alloc: 218103808 data_used: 212992
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:48.943082+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:49.943261+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:50.943448+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:51.943560+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:52.943722+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977166 data_alloc: 218103808 data_used: 212992
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:53.943841+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 17358848 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:54.944053+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 10
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:55.944507+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:56.944747+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:57.945007+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977326 data_alloc: 218103808 data_used: 217088
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:58.945192+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:33:59.945384+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:00.945560+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:01.945739+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 17350656 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:02.945932+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 11
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977326 data_alloc: 218103808 data_used: 217088
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.246669769s of 31.416391373s, submitted: 47
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:03.946054+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:04.946181+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:05.947622+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:06.948241+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:07.948390+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976654 data_alloc: 218103808 data_used: 217088
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:08.948530+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:09.948735+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:10.949115+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:11.949322+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:12.949504+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976670 data_alloc: 218103808 data_used: 217088
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:13.949633+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:14.949970+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:15.950231+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.018905640s of 13.032593727s, submitted: 5
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:16.950532+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:17.950800+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976638 data_alloc: 218103808 data_used: 217088
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:18.951031+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:19.951288+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 17293312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:20.951418+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 17285120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:21.951673+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 17285120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:22.952021+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976622 data_alloc: 218103808 data_used: 217088
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 17285120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:23.952155+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 17285120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:24.952336+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 17285120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:25.952452+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e7211400
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe32000/0x0/0x4ffc00000, data 0xd3016c/0xdec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:26.952671+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:27.952864+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.039279938s of 12.053936005s, submitted: 5
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978390 data_alloc: 218103808 data_used: 217088
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:28.952998+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:29.953113+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:30.953344+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:31.953508+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 heartbeat osd_stat(store_statfs(0x4fbe31000/0x0/0x4ffc00000, data 0xd30207/0xded000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:32.953723+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978390 data_alloc: 218103808 data_used: 217088
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 17276928 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:33.953908+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:34.954027+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0xd31ded/0xdf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:35.954158+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0xd31ded/0xdf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:36.954303+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0xd31ded/0xdf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:37.954454+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982212 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:38.954653+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0xd31ded/0xdf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:39.954840+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:40.955007+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:41.955164+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:42.955362+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982212 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:43.955491+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fbe2d000/0x0/0x4ffc00000, data 0xd31ded/0xdf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:44.955667+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.812093735s of 16.887153625s, submitted: 21
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:45.955775+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 17367040 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:46.955891+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 12
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:47.955997+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984498 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:48.956141+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:49.956278+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fbe2b000/0x0/0x4ffc00000, data 0xd33850/0xdf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:50.956377+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:51.956512+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:52.956842+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984530 data_alloc: 218103808 data_used: 225280
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:53.956961+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:54.957082+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.063793182s of 10.084918976s, submitted: 14
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fbe2b000/0x0/0x4ffc00000, data 0xd33850/0xdf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:55.957215+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:56.957334+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:57.957456+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986138 data_alloc: 218103808 data_used: 229376
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:58.957602+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fbe2a000/0x0/0x4ffc00000, data 0xd338eb/0xdf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:34:59.957760+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:00.957928+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:01.958074+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:02.958224+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987922 data_alloc: 218103808 data_used: 229376
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:03.958310+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 17301504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fbe2a000/0x0/0x4ffc00000, data 0xd338eb/0xdf4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 127 handle_osd_map epochs [128,129], i have 127, src has [1,129]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 129 handle_osd_map epochs [128,128], i have 129, src has [1,128]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:04.958412+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.859433174s of 10.010948181s, submitted: 51
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fbe23000/0x0/0x4ffc00000, data 0xd370d7/0xdfa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 17203200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:05.958522+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 17195008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:06.958671+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 17186816 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fbe20000/0x0/0x4ffc00000, data 0xd38d88/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:07.958799+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 131 heartbeat osd_stat(store_statfs(0x4fbe20000/0x0/0x4ffc00000, data 0xd38d88/0xdfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1003074 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:08.958948+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:09.959082+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:10.959200+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:11.959321+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:12.959487+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005020 data_alloc: 218103808 data_used: 245760
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fbe1b000/0x0/0x4ffc00000, data 0xd3c509/0xe03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:13.959624+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:14.959785+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:15.959919+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 132 handle_osd_map epochs [133,134], i have 132, src has [1,134]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.698640823s of 10.893690109s, submitted: 71
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:16.960050+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fbe15000/0x0/0x4ffc00000, data 0xd3fae7/0xe08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:17.960216+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011540 data_alloc: 218103808 data_used: 253952
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:18.960313+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:19.960437+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:20.960577+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:21.960740+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fbe15000/0x0/0x4ffc00000, data 0xd3fae7/0xe08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:22.960934+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011556 data_alloc: 218103808 data_used: 253952
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:23.961098+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:24.961296+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75104256 unmapped: 16072704 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:25.961373+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 16171008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:26.961512+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fbe12000/0x0/0x4ffc00000, data 0xd4156a/0xe0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 16171008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:27.961670+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 16171008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1013810 data_alloc: 218103808 data_used: 253952
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:28.961838+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75005952 unmapped: 16171008 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.177145004s of 13.295339584s, submitted: 46
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:29.962007+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75030528 unmapped: 16146432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:30.962218+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fbe11000/0x0/0x4ffc00000, data 0xd416a0/0xe0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 16138240 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:31.962373+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75038720 unmapped: 16138240 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:32.962554+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1018872 data_alloc: 218103808 data_used: 262144
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:33.962732+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:34.962858+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fbe0e000/0x0/0x4ffc00000, data 0xd431eb/0xe0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:35.963010+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd43150/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:36.963149+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:37.963334+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1017302 data_alloc: 218103808 data_used: 262144
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:38.963493+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd43150/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:39.963630+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fbe10000/0x0/0x4ffc00000, data 0xd43150/0xe0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.755588531s of 10.842039108s, submitted: 28
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:40.963787+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:41.963915+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:42.964184+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021300 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:43.964364+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:44.964489+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44bb3/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:45.964657+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44bb3/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:46.964846+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 16130048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44bb3/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:47.965037+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021476 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:48.965210+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:49.965380+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44bb3/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:50.965543+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:51.965733+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:52.965936+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 16121856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021476 data_alloc: 218103808 data_used: 270336
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.351358414s of 13.363707542s, submitted: 12
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:53.966054+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:54.966204+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:55.966360+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:56.966517+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:57.966649+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1022524 data_alloc: 218103808 data_used: 274432
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:58.966798+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:35:59.966975+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:00.967131+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 16113664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:01.967849+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:02.968030+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021658 data_alloc: 218103808 data_used: 274432
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:03.968333+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:04.968556+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0d000/0x0/0x4ffc00000, data 0xd44bb3/0xe11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.000518799s of 12.013453484s, submitted: 4
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:05.968683+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:06.968844+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:07.969001+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023426 data_alloc: 218103808 data_used: 274432
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:08.969164+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:09.969364+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:10.969539+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 16105472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:11.969698+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:12.969869+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 13
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1023074 data_alloc: 218103808 data_used: 274432
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:13.970039+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:14.970181+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fbe0c000/0x0/0x4ffc00000, data 0xd44c4e/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:15.970322+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:16.970472+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 15654912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.645391464s of 11.660141945s, submitted: 135
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:17.970614+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 15646720 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028518 data_alloc: 218103808 data_used: 282624
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:18.970859+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 15646720 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd46834/0xe15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:19.971029+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:20.971181+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:21.971360+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:22.971532+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028828 data_alloc: 218103808 data_used: 282624
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:23.971706+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:24.971846+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76587008 unmapped: 14589952 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd4839f/0xe17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:25.971960+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 14581760 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:26.972113+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76595200 unmapped: 14581760 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.854335785s of 10.010847092s, submitted: 61
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:27.972258+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031432 data_alloc: 218103808 data_used: 290816
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:28.972503+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:29.972638+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:30.972792+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe05000/0x0/0x4ffc00000, data 0xd49d67/0xe19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:31.972955+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:32.973203+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033200 data_alloc: 218103808 data_used: 290816
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:33.973394+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe04000/0x0/0x4ffc00000, data 0xd49e02/0xe1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:34.973600+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:35.973811+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe04000/0x0/0x4ffc00000, data 0xd49e02/0xe1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:36.974009+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:37.974199+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033200 data_alloc: 218103808 data_used: 290816
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:38.974370+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:39.974549+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:40.974673+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.005295753s of 14.014258385s, submitted: 3
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe04000/0x0/0x4ffc00000, data 0xd49e02/0xe1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:41.974828+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:42.975004+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031432 data_alloc: 218103808 data_used: 290816
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:43.975184+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:44.975388+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:45.975572+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:46.975703+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe05000/0x0/0x4ffc00000, data 0xd49d67/0xe19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:47.975886+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031432 data_alloc: 218103808 data_used: 290816
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:48.976026+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76611584 unmapped: 14565376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:49.976210+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe05000/0x0/0x4ffc00000, data 0xd49d67/0xe19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:50.976371+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:51.976526+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:52.976682+0000)
Nov 29 05:54:19 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3490713332' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1031448 data_alloc: 218103808 data_used: 290816
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:53.976755+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:54.977105+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:55.977258+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe05000/0x0/0x4ffc00000, data 0xd49d67/0xe19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76619776 unmapped: 14557184 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.337747574s of 15.514533043s, submitted: 3
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:56.977374+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76627968 unmapped: 14548992 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:57.977519+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 14540800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033200 data_alloc: 218103808 data_used: 290816
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:58.977634+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76636160 unmapped: 14540800 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fbe04000/0x0/0x4ffc00000, data 0xd49e02/0xe1a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:36:59.977755+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:00.977916+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:01.978045+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:02.978198+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034616 data_alloc: 218103808 data_used: 290816
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:03.978312+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:04.978489+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 140 handle_osd_map epochs [141,142], i have 140, src has [1,142]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fbe03000/0x0/0x4ffc00000, data 0xd49ec8/0xe1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:05.978604+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76677120 unmapped: 14499840 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fbdfc000/0x0/0x4ffc00000, data 0xd4d6b4/0xe21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:06.978721+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.194281578s of 10.365506172s, submitted: 53
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 14491648 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:07.993760+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 14491648 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1041224 data_alloc: 218103808 data_used: 299008
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:08.993955+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76685312 unmapped: 14491648 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4d61c/0xe20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:09.994257+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fbdfd000/0x0/0x4ffc00000, data 0xd4d61c/0xe20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76693504 unmapped: 14483456 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:10.994480+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76701696 unmapped: 14475264 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:11.994721+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:12.995458+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1045606 data_alloc: 218103808 data_used: 311296
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:13.995628+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbdfa000/0x0/0x4ffc00000, data 0xd4f099/0xe23000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:14.995824+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:15.995982+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:16.996108+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:17.996290+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:18.996468+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1043860 data_alloc: 218103808 data_used: 311296
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76742656 unmapped: 14434304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fbdfc000/0x0/0x4ffc00000, data 0xd4efd2/0xe22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:19.996662+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.908679962s of 12.944479942s, submitted: 20
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:20.996795+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:21.996945+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:22.997094+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:23.997259+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048034 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:24.997417+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:25.997598+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:26.997744+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:27.997858+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:28.997975+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048034 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:29.998115+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.214165688s of 10.225300789s, submitted: 15
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:30.998346+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:31.998529+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:32.998694+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:33.998820+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048050 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:34.999051+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:35.999188+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:36.999334+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:37.999478+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:38.999664+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047170 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:39.999827+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:40.999965+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50afe/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:42.000118+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50afe/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76750848 unmapped: 14426112 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:43.000304+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.929623604s of 12.947863579s, submitted: 6
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:44.000510+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050818 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:45.000672+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf5000/0x0/0x4ffc00000, data 0xd50bc4/0xe27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:46.000845+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:47.000952+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50afb/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:48.001119+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:49.001310+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50afb/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049824 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:50.001477+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:51.001590+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76759040 unmapped: 14417920 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:52.001737+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 14409728 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:53.002296+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 14409728 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:54.002471+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048072 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 14409728 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:55.002620+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 14409728 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:56.002772+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76767232 unmapped: 14409728 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:57.002934+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.179697037s of 14.221464157s, submitted: 13
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76775424 unmapped: 14401536 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:58.003119+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:37:59.003332+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051416 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:00.003471+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:01.003600+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50b6b/0xe27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:02.003768+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf7000/0x0/0x4ffc00000, data 0xd50b6b/0xe27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:03.003951+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:04.004088+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051432 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76800000 unmapped: 14376960 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:05.004218+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 14368768 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf8000/0x0/0x4ffc00000, data 0xd50ad0/0xe26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:06.004448+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76808192 unmapped: 14368768 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:07.004568+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.019722939s of 10.055611610s, submitted: 11
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 14344192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:08.004744+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 14344192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:09.004888+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054342 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 14344192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf4000/0x0/0x4ffc00000, data 0xd50bc7/0xe28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:10.005074+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76832768 unmapped: 14344192 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:11.005245+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:12.005381+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:13.005525+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:14.005759+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050778 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:15.005888+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:16.006010+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:17.006187+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:18.006364+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 76840960 unmapped: 14336000 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:19.006566+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050778 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.783651352s of 11.822710991s, submitted: 13
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:20.006733+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:21.006879+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:22.007060+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:23.007237+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:24.007315+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050794 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:25.007428+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:26.007621+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:27.007809+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:28.007954+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:29.008105+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050794 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.000278473s of 10.004323006s, submitted: 1
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:30.008241+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:31.008416+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:32.008600+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:33.008743+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 13287424 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:34.008895+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1050778 data_alloc: 218103808 data_used: 319488
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fbdf9000/0x0/0x4ffc00000, data 0xd50a35/0xe25000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:35.009012+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:36.009175+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:37.009306+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:38.009411+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:39.009624+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fbdf3000/0x0/0x4ffc00000, data 0xd526e4/0xe29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1056832 data_alloc: 218103808 data_used: 327680
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.260903358s of 10.339648247s, submitted: 30
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:40.009787+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:41.009913+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:42.010060+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:43.010243+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:44.010416+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057368 data_alloc: 218103808 data_used: 327680
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fbdf3000/0x0/0x4ffc00000, data 0xd5277e/0xe2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:45.010543+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 13230080 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:46.010688+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77955072 unmapped: 13221888 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:47.010802+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0xd541e1/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 13197312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:48.010961+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77979648 unmapped: 13197312 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:49.011051+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063182 data_alloc: 218103808 data_used: 335872
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 13189120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:50.011185+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 77987840 unmapped: 13189120 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.023447037s of 11.083169937s, submitted: 24
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:51.011333+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 146 heartbeat osd_stat(store_statfs(0x4fbdf2000/0x0/0x4ffc00000, data 0xd54119/0xe2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 13164544 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:52.011516+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 13164544 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:53.011787+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbdf0000/0x0/0x4ffc00000, data 0xd541e2/0xe2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78012416 unmapped: 13164544 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:54.011933+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1065822 data_alloc: 218103808 data_used: 344064
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 13131776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:55.012053+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 13131776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:56.012190+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbded000/0x0/0x4ffc00000, data 0xd55dc6/0xe30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 13131776 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:57.012321+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 13123584 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:58.012425+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 13123584 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 147 heartbeat osd_stat(store_statfs(0x4fbdee000/0x0/0x4ffc00000, data 0xd55cff/0xe2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:38:59.012533+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066020 data_alloc: 218103808 data_used: 344064
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78053376 unmapped: 13123584 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:00.012660+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78061568 unmapped: 13115392 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:01.012849+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 13107200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:02.012956+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fbde9000/0x0/0x4ffc00000, data 0xd57804/0xe33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 13107200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:03.013107+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78069760 unmapped: 13107200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:04.013254+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.151597977s of 13.277581215s, submitted: 46
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071034 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fbdea000/0x0/0x4ffc00000, data 0xd577be/0xe33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 13074432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:05.013452+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 13074432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:06.013611+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 13074432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:07.013759+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 13074432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 7231 writes, 27K keys, 7231 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7231 writes, 1573 syncs, 4.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1451 writes, 3407 keys, 1451 commit groups, 1.0 writes per commit group, ingest: 1.89 MB, 0.00 MB/s
                                           Interval WAL: 1451 writes, 597 syncs, 2.43 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:08.013920+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78102528 unmapped: 13074432 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:09.014037+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e9cefc00
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1071518 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 13058048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:10.014173+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fbde9000/0x0/0x4ffc00000, data 0xd577dc/0xe32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 14
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 13058048 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:11.014246+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 13049856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:12.014426+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 13049856 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:13.014621+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fbdec000/0x0/0x4ffc00000, data 0xd577dc/0xe32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78143488 unmapped: 13033472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:14.014784+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1073400 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.193478584s of 10.242251396s, submitted: 14
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc ms_handle_reset ms_handle_reset con 0x55c4e7e6b400
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1460327761
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: get_auth_request con 0x55c4e9e6dc00 auth_method 0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_configure stats_period=5
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 78782464 unmapped: 12394496 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:15.015037+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 9445376 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:16.015210+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 9314304 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:17.015433+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 9109504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:18.015618+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fabeb000/0x0/0x4ffc00000, data 0xdb7c1e/0xe93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 7462912 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:19.015752+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1092844 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 ms_handle_reset con 0x55c4e72bec00 session 0x55c4e957c1e0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: handle_auth_request added challenge on 0x55c4e7ee1000
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 6963200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:20.015920+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 84213760 unmapped: 6963200 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:21.016054+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faba8000/0x0/0x4ffc00000, data 0xdf8df7/0xed5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 7184384 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:22.016212+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 5799936 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:23.016427+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 5799936 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:24.016584+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1090844 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 5767168 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.244800568s of 10.530930519s, submitted: 82
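
The _kv_sync_thread line is a periodic utilization report for BlueStore's RocksDB commit thread: seconds spent idle out of the reporting window, plus transactions submitted in that window. For the sample above that works out to roughly 2.7% busy and about eight transactions per second, so the store is close to idle:

    # Busy fraction and throughput from the _kv_sync_thread report above.
    idle, window, submitted = 10.244800568, 10.530930519, 82
    print(f"busy {1 - idle / window:.1%}, ~{submitted / window:.1f} txns/s")
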
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:25.016695+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fab44000/0x0/0x4ffc00000, data 0xe5c987/0xf3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 5873664 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:26.016828+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faaff000/0x0/0x4ffc00000, data 0xea1618/0xf7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 5554176 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:27.016984+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 5316608 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:28.017109+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 4898816 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:29.017438+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101184 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87359488 unmapped: 3817472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:30.017621+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87359488 unmapped: 3817472 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:31.017746+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87695360 unmapped: 3481600 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:32.051081+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4faa6d000/0x0/0x4ffc00000, data 0xf36459/0x1011000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 3391488 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:33.051516+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87785472 unmapped: 3391488 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:34.051648+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114800 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 88211456 unmapped: 2965504 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:35.051852+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.375099182s of 10.666891098s, submitted: 91
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa5e4000/0x0/0x4ffc00000, data 0xfaf493/0x108a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 87531520 unmapped: 3645440 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:36.052013+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 2424832 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:37.052185+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2457600 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:38.052345+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 88719360 unmapped: 2457600 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:39.052500+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa585000/0x0/0x4ffc00000, data 0x100cbce/0x10e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1111584 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa585000/0x0/0x4ffc00000, data 0x100cbce/0x10e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 88752128 unmapped: 2424832 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:40.052627+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 2170880 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:41.052734+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89006080 unmapped: 2170880 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:42.052830+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa532000/0x0/0x4ffc00000, data 0x1060023/0x113c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:43.052964+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89014272 unmapped: 2162688 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:44.053080+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125482 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:45.053202+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa4e3000/0x0/0x4ffc00000, data 0x10affc2/0x118b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:46.053316+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89104384 unmapped: 2072576 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:47.053437+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 89423872 unmapped: 1753088 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.628952026s of 11.860681534s, submitted: 77
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:48.053565+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 729088 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:49.053666+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90447872 unmapped: 729088 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130934 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:50.053840+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 352256 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa489000/0x0/0x4ffc00000, data 0x110a37c/0x11e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:51.053970+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 352256 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:52.054130+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90824704 unmapped: 352256 heap: 91176960 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:53.054324+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90685440 unmapped: 1540096 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa469000/0x0/0x4ffc00000, data 0x112984a/0x1205000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:54.054523+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90759168 unmapped: 1466368 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1131374 data_alloc: 218103808 data_used: 356352
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa43c000/0x0/0x4ffc00000, data 0x115775e/0x1232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:55.054655+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90759168 unmapped: 1466368 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:56.055373+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90382336 unmapped: 1843200 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:57.055537+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90382336 unmapped: 1843200 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:58.055668+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90382336 unmapped: 1843200 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.796188354s of 10.946480751s, submitted: 45
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:39:59.055777+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90390528 unmapped: 1835008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128236 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa43b000/0x0/0x4ffc00000, data 0x11577fd/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:00.055903+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90390528 unmapped: 1835008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:01.056046+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:02.056195+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 heartbeat osd_stat(store_statfs(0x4fa43b000/0x0/0x4ffc00000, data 0x11577fd/0x1233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:03.056360+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:04.056492+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1127370 data_alloc: 218103808 data_used: 352256
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
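
The handle_osd_map lines trace the OSD catching up with cluster map epochs: "[149,149]" is the range of maps carried by the incoming message, "i have 148" is this OSD's newest epoch, and "src has [1,149]" is the sender's full range. Because the message begins at exactly one past the local epoch, the maps can be applied directly, and the next heartbeat line does report "osd.0 149". A sketch of that contiguity check (illustrative only, not Ceph's code):

    # Illustrative epoch catch-up check, not Ceph's actual implementation.
    def can_apply(first: int, last: int, have: int) -> bool:
        """True iff the range covers epoch have+1, i.e. extends us gap-free."""
        return first <= have + 1 <= last

    assert can_apply(149, 149, 148)       # contiguous: apply, now at 149
    assert not can_apply(151, 152, 148)   # gap at 149-150: must backfill first
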
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:05.056644+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 149 heartbeat osd_stat(store_statfs(0x4fa438000/0x0/0x4ffc00000, data 0x1159348/0x1235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:06.056821+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:07.056966+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:08.057108+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.159050941s of 10.226642609s, submitted: 30
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:09.057330+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129990 data_alloc: 218103808 data_used: 360448
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:10.057460+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90398720 unmapped: 1826816 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 150 heartbeat osd_stat(store_statfs(0x4fa43a000/0x0/0x4ffc00000, data 0x11592ad/0x1234000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [0,0,1])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:11.057582+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90390528 unmapped: 1835008 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:12.057738+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90415104 unmapped: 1810432 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 15
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:13.057932+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90275840 unmapped: 1949696 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:14.058087+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90292224 unmapped: 1933312 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1137658 data_alloc: 218103808 data_used: 368640
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:15.058406+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90292224 unmapped: 1933312 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa433000/0x0/0x4ffc00000, data 0x115c991/0x123b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:16.058638+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:17.058819+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x115cab7/0x123d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:18.058951+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x115cab7/0x123d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:19.059121+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141322 data_alloc: 218103808 data_used: 368640
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:20.059361+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.526574135s of 12.174523354s, submitted: 146
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:21.059499+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa42d000/0x0/0x4ffc00000, data 0x115e51a/0x1240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:22.059641+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa42d000/0x0/0x4ffc00000, data 0x115e51a/0x1240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:23.059799+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:24.059953+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1145016 data_alloc: 218103808 data_used: 376832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:25.060115+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90210304 unmapped: 2015232 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:26.060312+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90218496 unmapped: 2007040 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:27.060438+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90218496 unmapped: 2007040 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 152 heartbeat osd_stat(store_statfs(0x4fa42d000/0x0/0x4ffc00000, data 0x115e51a/0x1240000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:28.060595+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90218496 unmapped: 2007040 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:29.060715+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90218496 unmapped: 2007040 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1148470 data_alloc: 218103808 data_used: 389120
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:30.060883+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90218496 unmapped: 2007040 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:31.061001+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.807652473s of 10.892947197s, submitted: 47
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:32.061131+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa426000/0x0/0x4ffc00000, data 0x1161b63/0x1246000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:33.061324+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:34.061473+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152508 data_alloc: 218103808 data_used: 389120
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 154 heartbeat osd_stat(store_statfs(0x4fa427000/0x0/0x4ffc00000, data 0x1161bfe/0x1247000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:35.061659+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:36.061839+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:37.062013+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:38.062145+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90234880 unmapped: 1990656 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:39.062308+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fa424000/0x0/0x4ffc00000, data 0x11636ee/0x1249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90243072 unmapped: 1982464 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155816 data_alloc: 218103808 data_used: 397312
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:40.062487+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90243072 unmapped: 1982464 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:41.062616+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90259456 unmapped: 1966080 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 156 ms_handle_reset con 0x55c4e9cefc00 session 0x55c4ea21a3c0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:42.062719+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90480640 unmapped: 1744896 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.805146217s of 10.963012695s, submitted: 200
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:43.062922+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 16
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:44.063082+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 157 heartbeat osd_stat(store_statfs(0x4fa41f000/0x0/0x4ffc00000, data 0x1166d67/0x124f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161610 data_alloc: 218103808 data_used: 397312
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:45.063281+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:46.063452+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:47.063628+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:48.063809+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:49.064002+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 157 handle_osd_map epochs [158,159], i have 157, src has [1,159]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fa420000/0x0/0x4ffc00000, data 0x1166ccc/0x124e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1168756 data_alloc: 218103808 data_used: 409600
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:50.064184+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:51.064372+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 90521600 unmapped: 1703936 heap: 92225536 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa419000/0x0/0x4ffc00000, data 0x116a4e8/0x1254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:52.064562+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:53.064775+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa415000/0x0/0x4ffc00000, data 0x116bf4b/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:54.064971+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172578 data_alloc: 218103808 data_used: 409600
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:55.065096+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:56.065222+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:57.065384+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 160 heartbeat osd_stat(store_statfs(0x4fa415000/0x0/0x4ffc00000, data 0x116bf4b/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:58.065509+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.826541901s of 16.015865326s, submitted: 77
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:40:59.065659+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170834 data_alloc: 218103808 data_used: 417792
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:00.065776+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 17
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:01.065907+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92626944 unmapped: 647168 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:02.066106+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92635136 unmapped: 638976 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:03.066307+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92635136 unmapped: 638976 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 161 heartbeat osd_stat(store_statfs(0x4fa417000/0x0/0x4ffc00000, data 0x116bf4b/0x1257000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:04.066447+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa413000/0x0/0x4ffc00000, data 0x116db61/0x125a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92651520 unmapped: 622592 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178654 data_alloc: 218103808 data_used: 425984
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:05.066602+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92684288 unmapped: 589824 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x116f787/0x125d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:06.066786+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 573440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:07.066971+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92700672 unmapped: 573440 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:08.067153+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92708864 unmapped: 565248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:09.067361+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92708864 unmapped: 565248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177118 data_alloc: 218103808 data_used: 425984
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:10.067566+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92708864 unmapped: 565248 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.937370300s of 12.083664894s, submitted: 69
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:11.067679+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa40e000/0x0/0x4ffc00000, data 0x11711fa/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:12.067828+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:13.067996+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa40e000/0x0/0x4ffc00000, data 0x11711fa/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:14.068158+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:15.068368+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:16.068502+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:17.068689+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:18.068825+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:19.069020+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:20.069214+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:21.069411+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:22.069610+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:23.069875+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:24.070064+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:25.070204+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:26.070343+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:27.070501+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:28.070690+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:29.070807+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:30.070963+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92725248 unmapped: 548864 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:31.071105+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:32.071299+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:33.071485+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:34.071631+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:35.071730+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:36.071841+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:37.071989+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:38.072188+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:39.072358+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:40.072490+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:41.072652+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:42.072790+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:43.072969+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:44.073155+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:45.073331+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:46.073492+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:47.073654+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:48.073790+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92733440 unmapped: 540672 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:49.074000+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:50.074412+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:51.074621+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:52.074808+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:53.075022+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:54.075217+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:55.075385+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92741632 unmapped: 532480 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:56.075618+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.621803284s of 45.639411926s, submitted: 14
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:57.075755+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x11711fa/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:58.075962+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:41:59.076150+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180768 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:00.076408+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:01.076561+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x11711fa/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92749824 unmapped: 524288 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:02.076736+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:03.076887+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:04.077022+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180592 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:05.077147+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa40f000/0x0/0x4ffc00000, data 0x11711fa/0x125f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:06.077281+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:07.077428+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:08.077556+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:09.077722+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179016 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:10.077843+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:11.077984+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92758016 unmapped: 516096 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.522039413s of 15.537490845s, submitted: 6
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:12.078085+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:13.078211+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 18
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:14.078325+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1178824 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:15.078468+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:16.078623+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:17.078772+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:18.078893+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:19.079048+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92905472 unmapped: 368640 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179000 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:20.079181+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:21.079309+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:22.079407+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:23.079597+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fa410000/0x0/0x4ffc00000, data 0x117115f/0x125e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:24.079713+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179016 data_alloc: 218103808 data_used: 434176
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:25.079844+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92913664 unmapped: 360448 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.762817383s of 13.779428482s, submitted: 137
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 163 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:26.079977+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa40c000/0x0/0x4ffc00000, data 0x1172d45/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:27.080135+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:28.080319+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:29.080467+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183190 data_alloc: 218103808 data_used: 442368
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:30.080614+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa40c000/0x0/0x4ffc00000, data 0x1172d45/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:31.080734+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:32.080907+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa40c000/0x0/0x4ffc00000, data 0x1172d45/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:33.081075+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:34.081239+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183190 data_alloc: 218103808 data_used: 442368
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:35.081387+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:36.081523+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fa40d000/0x0/0x4ffc00000, data 0x1172d45/0x1261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:37.081731+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:38.081868+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92921856 unmapped: 352256 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.566138268s of 13.058055878s, submitted: 22
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:39.081985+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 344064 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1186468 data_alloc: 218103808 data_used: 450560
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:40.082135+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 344064 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:41.082299+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 327680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:42.082466+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa409000/0x0/0x4ffc00000, data 0x11747a8/0x1264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 327680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:43.082633+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92946432 unmapped: 327680 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:44.082807+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 165 heartbeat osd_stat(store_statfs(0x4fa409000/0x0/0x4ffc00000, data 0x11747a8/0x1264000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189266 data_alloc: 218103808 data_used: 450560
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:45.083010+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:46.083194+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:47.083387+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:48.083535+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:49.083666+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa406000/0x0/0x4ffc00000, data 0x11763be/0x1267000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189266 data_alloc: 218103808 data_used: 450560
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:50.083869+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92954624 unmapped: 319488 heap: 93274112 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.003107071s of 12.074946404s, submitted: 41
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:51.084040+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:52.084179+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:53.084341+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:54.084467+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:55.084928+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192240 data_alloc: 218103808 data_used: 450560
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:56.085116+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:57.085246+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:58.085414+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:42:59.085585+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:00.085703+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192400 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:01.085857+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:02.086057+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:03.086251+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:04.086502+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92971008 unmapped: 1351680 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:05.086623+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192400 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:06.086766+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:07.086868+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:08.087040+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:09.087171+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:10.087304+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1192400 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92979200 unmapped: 1343488 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:11.087417+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:12.087816+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.950380325s of 21.958806992s, submitted: 11
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:13.087988+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:14.088107+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:15.088251+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191712 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:16.088447+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:17.088607+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177edc/0x126b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:18.088756+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:19.088901+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa403000/0x0/0x4ffc00000, data 0x1177edc/0x126b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:20.089082+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193480 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:21.089293+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:22.089437+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:23.089578+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:24.089755+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:25.089920+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191696 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:26.090054+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:27.090213+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.996479988s of 15.018195152s, submitted: 6
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:28.090358+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:29.090523+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:30.090699+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191712 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:31.090866+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:32.091030+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92987392 unmapped: 1335296 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:33.091201+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa404000/0x0/0x4ffc00000, data 0x1177e41/0x126a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92798976 unmapped: 1523712 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:34.091316+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92798976 unmapped: 1523712 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:35.091466+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196034 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 1392640 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:36.091610+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3e4000/0x0/0x4ffc00000, data 0x11975fd/0x128a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 92930048 unmapped: 1392640 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:37.091774+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93003776 unmapped: 1318912 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.935274124s of 10.001768112s, submitted: 13
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:38.091924+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 1187840 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:39.092058+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93134848 unmapped: 1187840 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:40.092182+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa397000/0x0/0x4ffc00000, data 0x11e3e61/0x12d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204328 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93208576 unmapped: 1114112 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:41.092302+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 909312 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:42.092509+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93413376 unmapped: 909312 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa36f000/0x0/0x4ffc00000, data 0x120b4e7/0x12ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:43.092675+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93593600 unmapped: 729088 heap: 94322688 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:44.092797+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93732864 unmapped: 1638400 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:45.092915+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1206934 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93831168 unmapped: 1540096 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:46.093062+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93831168 unmapped: 1540096 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:47.093182+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 93904896 unmapped: 1466368 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.703379631s of 10.000169754s, submitted: 24
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:48.093302+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2fb000/0x0/0x4ffc00000, data 0x12806a1/0x1373000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 94068736 unmapped: 1302528 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:49.093422+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 94093312 unmapped: 1277952 heap: 95371264 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:50.093570+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205894 data_alloc: 218103808 data_used: 454656
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 2187264 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:51.093681+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 2187264 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:52.093846+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 94232576 unmapped: 2187264 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2f7000/0x0/0x4ffc00000, data 0x1284b19/0x1377000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:53.094057+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95444992 unmapped: 974848 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:54.094208+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2df000/0x0/0x4ffc00000, data 0x129c984/0x138f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
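
handle_osd_map marks the OSD consuming a new cluster map: it holds epoch 167, the source advertises [1,168], and it receives the single epoch 168; the next heartbeat line indeed reports "osd.0 168", and the same pattern repeats shortly after for epoch 169. A small parse of the line's bracketed ranges (reading them as received map epochs is an interpretation of the message text, not documented in the log itself):

    import re

    line = "osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]"
    m = re.search(r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]", line)
    first, last, have, src_lo, src_hi = map(int, m.groups())
    print(f"received epochs {first}..{last}; local epoch {have} -> {last}")
    # -> received epochs 168..168; local epoch 167 -> 168
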
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 925696 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:55.094368+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212656 data_alloc: 218103808 data_used: 462848
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95494144 unmapped: 925696 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:56.094520+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95518720 unmapped: 901120 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:57.094683+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95518720 unmapped: 901120 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:58.094874+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 168 heartbeat osd_stat(store_statfs(0x4fa2b7000/0x0/0x4ffc00000, data 0x12c2c36/0x13b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95199232 unmapped: 1220608 heap: 96419840 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:43:59.095052+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 2113536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:00.095189+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214480 data_alloc: 218103808 data_used: 462848
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95354880 unmapped: 2113536 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _renew_subs
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.013223648s of 13.096959114s, submitted: 33
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:01.095351+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:02.095504+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:03.095658+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:04.095854+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2b3000/0x0/0x4ffc00000, data 0x12c4699/0x13ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:05.095997+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216286 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:06.096207+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:07.096389+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2b3000/0x0/0x4ffc00000, data 0x12c4699/0x13ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95404032 unmapped: 2064384 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:08.096535+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 1998848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:09.096700+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 1998848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:10.096827+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95469568 unmapped: 1998848 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:11.096968+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:12.097097+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:13.097239+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:14.097393+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:15.097513+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:16.097663+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:17.097784+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:18.097941+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:19.098064+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:20.098215+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:21.098394+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:22.098548+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:23.288973+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:24.289155+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:25.289303+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:26.289656+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:27.289782+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:28.289899+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:29.289992+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:30.290137+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:31.290241+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:32.290350+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:33.290483+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:34.290601+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:35.290748+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:36.290879+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:37.290993+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:38.291115+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:39.291319+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:40.291734+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:41.291854+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:42.291962+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:43.292141+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:44.292289+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:45.292473+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:46.292626+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:47.292755+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:48.292882+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:49.293009+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:50.293179+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:51.293315+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:52.293482+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:53.293684+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:54.293797+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:55.293914+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:56.294061+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:57.294223+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:58.294323+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:44:59.294449+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:00.294586+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:01.294716+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:02.294832+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:03.294994+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:04.295133+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95485952 unmapped: 1982464 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:05.295309+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95477760 unmapped: 1990656 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:06.295530+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'config diff' '{prefix=config diff}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'config show' '{prefix=config show}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95330304 unmapped: 2138112 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:07.295671+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95346688 unmapped: 2121728 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:08.295927+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95346688 unmapped: 2121728 heap: 97468416 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:09.296056+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'log dump' '{prefix=log dump}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'perf dump' '{prefix=perf dump}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95420416 unmapped: 13090816 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:10.296201+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'perf schema' '{prefix=perf schema}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217742 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:11.296316+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 70.377067566s of 70.394645691s, submitted: 11
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 ms_handle_reset con 0x55c4e7211400 session 0x55c4e9d043c0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a8000/0x0/0x4ffc00000, data 0x12d004e/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:12.296497+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Got map version 19
Nov 29 05:54:19 compute-0 ceph-osd[89151]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/1460327761,v1:192.168.122.100:6801/1460327761]
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:13.296678+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:14.296819+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:15.296963+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:16.297117+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:17.297332+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:18.297478+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:19.297618+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:20.297767+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:21.297901+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:22.298019+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:23.325653+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:24.325936+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:25.326063+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:26.326188+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:27.326319+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:28.326430+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:29.326552+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:30.326739+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:31.326897+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:32.327053+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:33.327320+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:34.327551+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:35.327708+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:36.327887+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:37.328089+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:38.328250+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:39.328450+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:40.328612+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:41.328846+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:42.329052+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:43.329321+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:44.329583+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:45.329826+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:46.330047+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:47.330318+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:48.330501+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:49.330720+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:50.330890+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:51.331115+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:52.331286+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:53.331472+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:54.331657+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:55.331881+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:56.332080+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:57.332356+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:58.332992+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:45:59.333237+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:00.333510+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:01.333694+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:02.333843+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:03.334005+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:04.334155+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:05.334342+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:06.334501+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:07.334645+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:08.334804+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:09.335040+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:10.335221+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:11.335428+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:12.335589+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:13.335737+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:14.335898+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:15.336101+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:16.336259+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:17.336434+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/354878995' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 05:54:19 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2247564348' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 05:54:19 compute-0 ceph-mon[75176]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 05:54:19 compute-0 ceph-mon[75176]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 05:54:19 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3490713332' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:18.336578+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:19.336702+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:20.336855+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:21.337035+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:22.337240+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:23.337475+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:24.337613+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:25.337758+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:26.337887+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:27.338041+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:28.338236+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:29.338449+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:30.339606+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:31.339760+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:32.339875+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:33.340068+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:34.340243+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:35.340328+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:36.340474+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:37.340630+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:38.340754+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:39.340953+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:40.341172+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:41.341321+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:42.341537+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:43.341747+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:44.341917+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:45.342107+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:46.342336+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:47.342518+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:48.342695+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:49.342886+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:50.343057+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:51.343211+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:52.343355+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:53.343551+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:54.343701+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:55.343889+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:56.344070+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:57.344308+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:58.344463+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:46:59.344622+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:00.344799+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:01.344999+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:02.345154+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:03.345349+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:04.345509+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:05.345668+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:06.345849+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:07.346022+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:08.346200+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:09.346388+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:10.346590+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:11.346804+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:12.347032+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:13.347249+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:14.347449+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:15.347599+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:16.348175+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:17.348327+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:18.348477+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:19.348616+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:20.348796+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:21.348947+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:22.349139+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:23.349333+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:24.349477+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:25.349628+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:26.349801+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:27.349956+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:28.350145+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:29.350347+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95739904 unmapped: 12771328 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:30.350536+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 12763136 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:31.350692+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 12763136 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:32.350933+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 12763136 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:33.351285+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 12763136 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:34.351543+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 12763136 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:35.351688+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 12763136 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:36.352389+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95748096 unmapped: 12763136 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:37.352550+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:38.352706+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:39.352896+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:40.353062+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:41.353314+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:42.353463+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:43.353633+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:44.353764+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:45.353914+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:46.354081+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:47.354201+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:48.354372+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:49.354514+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:50.354677+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:51.354826+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:52.354977+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:53.355127+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:54.355304+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:55.355487+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:56.355626+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:57.355753+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:58.355904+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:47:59.356057+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:00.356200+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95756288 unmapped: 12754944 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:01.356387+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 12746752 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:02.356556+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 12746752 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:03.356719+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 12746752 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:04.356867+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 12746752 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:05.357028+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 12746752 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:06.357161+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 12746752 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:07.357302+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 12746752 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:08.357442+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95764480 unmapped: 12746752 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:09.357562+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95772672 unmapped: 12738560 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:10.357701+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95772672 unmapped: 12738560 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:11.357971+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:12.358114+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:13.358290+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:14.358444+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:15.358593+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:16.358720+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:17.358852+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:18.359037+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:19.359201+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:20.359409+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:21.359548+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:22.359677+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:23.359836+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:24.359969+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:25.360087+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:26.360208+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:27.360340+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:28.360501+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:29.360625+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:30.360750+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:31.360917+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:32.361046+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:33.361200+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:34.361338+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:35.361477+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:36.361596+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:37.361703+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:38.361860+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:39.361972+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:40.362085+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95780864 unmapped: 12730368 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:41.362211+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 12722176 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:42.362350+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 12722176 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:43.362585+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 12722176 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:44.362703+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 12722176 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:45.362822+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 12722176 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:46.363177+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 12722176 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:47.363304+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 12722176 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:48.363465+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95789056 unmapped: 12722176 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:49.363578+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:50.363712+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:51.363851+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:52.364028+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:53.364249+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:54.364618+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:55.364732+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:56.364869+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:57.364979+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:58.365105+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:48:59.365254+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:00.365428+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:01.365623+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:02.365801+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:03.366014+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:04.366187+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:05.366329+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:06.366496+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:07.366646+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 9347 writes, 33K keys, 9347 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 9347 writes, 2355 syncs, 3.97 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2116 writes, 5839 keys, 2116 commit groups, 1.0 writes per commit group, ingest: 7.88 MB, 0.01 MB/s
                                           Interval WAL: 2116 writes, 782 syncs, 2.71 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:08.366811+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:09.366981+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:10.367150+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95797248 unmapped: 12713984 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:11.367324+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95805440 unmapped: 12705792 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:12.367469+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:13.367636+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:14.367750+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:15.367916+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:16.368075+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:17.368207+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:18.368371+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:19.368536+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:20.368743+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:21.368926+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:22.369087+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:23.369234+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:24.369364+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:25.369535+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:26.369682+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:27.369797+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:28.369921+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:29.370053+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:30.370235+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:31.370370+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:32.370486+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:33.370639+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:34.370804+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:35.370923+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:36.371058+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:37.371209+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:38.371332+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:39.371459+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:40.371597+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:41.371722+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:42.371877+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:43.372048+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:44.372184+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95813632 unmapped: 12697600 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:45.372334+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95821824 unmapped: 12689408 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:46.372473+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95821824 unmapped: 12689408 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:47.372587+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95821824 unmapped: 12689408 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:48.372713+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95821824 unmapped: 12689408 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:49.372886+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95821824 unmapped: 12689408 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:50.373001+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95821824 unmapped: 12689408 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:51.373111+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95715328 unmapped: 12795904 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:52.373333+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95715328 unmapped: 12795904 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:53.373481+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:54.373601+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:55.373766+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:56.373888+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:57.374056+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:58.374173+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:49:59.374312+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:00.374421+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:01.374545+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:02.374672+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:03.374817+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:04.374949+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:05.375067+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:06.375182+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216382 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:07.375321+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:08.375478+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:09.375612+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95723520 unmapped: 12787712 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:10.375732+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.275238037s of 299.306976318s, submitted: 148
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:11.375867+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:12.376035+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [1])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95576064 unmapped: 12935168 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:13.376175+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95576064 unmapped: 12935168 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:14.376290+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95576064 unmapped: 12935168 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:15.376409+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95576064 unmapped: 12935168 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:16.376519+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95584256 unmapped: 12926976 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:17.376638+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95584256 unmapped: 12926976 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:18.376782+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:19.376898+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:20.377045+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:21.377179+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:22.377316+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:23.377473+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:24.377608+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:25.377794+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:26.377912+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:27.378073+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:28.378192+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:29.378317+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:30.378443+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:31.378586+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:32.378724+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95592448 unmapped: 12918784 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:33.378922+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:34.379075+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:35.379221+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:36.379347+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:37.379482+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:38.379618+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:39.379747+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:40.379866+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:41.379998+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:42.380129+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:43.380304+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:44.380414+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:45.380568+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:46.380722+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:47.380866+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:48.380977+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:49.381381+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:50.381516+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:51.381647+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:52.381777+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:53.381929+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:54.382051+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:55.382197+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:56.382327+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:57.382464+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:58.382584+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:50:59.382721+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:00.382853+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:51:01.383057+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:41.414403+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:42.414573+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:43.414784+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:44.414903+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:45.415025+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:46.415226+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:47.415370+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:48.415493+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95600640 unmapped: 12910592 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:49.415594+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 12902400 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:50.415703+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 12902400 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:51.415835+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 12902400 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:52.416031+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 12902400 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:53.416177+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 12902400 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:54.416354+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 12902400 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:55.416491+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 12902400 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:56.416637+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 12902400 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:57.416785+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 12902400 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:58.416963+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95608832 unmapped: 12902400 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:52:59.417140+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:00.417298+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:01.417437+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:02.417599+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:03.417757+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:04.417935+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:05.418086+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:06.418213+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:07.418387+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:08.418515+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:09.418711+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:10.418859+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95625216 unmapped: 12886016 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:11.419036+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95633408 unmapped: 12877824 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:12.419178+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95633408 unmapped: 12877824 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:13.419378+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95633408 unmapped: 12877824 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:14.419569+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95633408 unmapped: 12877824 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:15.419760+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95633408 unmapped: 12877824 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:16.419982+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95633408 unmapped: 12877824 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:17.420146+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95633408 unmapped: 12877824 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:18.420304+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95633408 unmapped: 12877824 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:19.420460+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95649792 unmapped: 12861440 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:20.420601+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95649792 unmapped: 12861440 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:21.420747+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:22.420894+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:23.421151+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:24.421373+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:25.421564+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:26.421723+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:27.421888+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:28.422032+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:29.422184+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:30.422363+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:31.422571+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:32.422699+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:33.422823+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:34.422933+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:35.423091+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:36.423213+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:37.423315+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:38.423417+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:39.423530+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95657984 unmapped: 12853248 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:40.423661+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 12836864 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:41.423769+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 12836864 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:42.423909+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 12836864 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:43.424093+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 12836864 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:44.424241+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 12836864 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:45.424375+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 12836864 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:46.424493+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95674368 unmapped: 12836864 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: osd.0 169 heartbeat osd_stat(store_statfs(0x4fa2a9000/0x0/0x4ffc00000, data 0x12d0261/0x13c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [1,2] op hist [])
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'config diff' '{prefix=config diff}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'config show' '{prefix=config show}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:47.424610+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 05:54:19 compute-0 ceph-osd[89151]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95682560 unmapped: 12828672 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: bluestore.MempoolThread(0x55c4e5b35b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1216158 data_alloc: 218103808 data_used: 471040
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:48.424743+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95731712 unmapped: 12779520 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: tick
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_tickets
Nov 29 05:54:19 compute-0 ceph-osd[89151]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-29T05:53:49.424865+0000)
Nov 29 05:54:19 compute-0 ceph-osd[89151]: prioritycache tune_memory target: 4294967296 mapped: 95379456 unmapped: 13131776 heap: 108511232 old mem: 2845415832 new mem: 2845415832
Nov 29 05:54:19 compute-0 ceph-osd[89151]: do_command 'log dump' '{prefix=log dump}'
Nov 29 05:54:19 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 05:54:20 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14981 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:20 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 05:54:20 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633157831' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 05:54:20 compute-0 ceph-mon[75176]: from='client.14981 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:20 compute-0 ceph-mon[75176]: pgmap v1529: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:20 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1633157831' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 05:54:20 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 05:54:20 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/202936755' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 05:54:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 05:54:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896162938' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 05:54:21 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 05:54:21 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684041957' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 05:54:21 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/202936755' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 05:54:21 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3896162938' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 05:54:21 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/2684041957' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 05:54:22 compute-0 podman[293971]: 2025-11-29 05:54:22.039153461 +0000 UTC m=+0.090521310 container health_status 7cab817740f9141615699dc5c7d593599e01657d7f8e9521e4c677c7784adb53 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller)
Nov 29 05:54:22 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 05:54:22 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14991 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:22 compute-0 systemd[1]: Started Hostname Service.
Nov 29 05:54:22 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 05:54:22 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531690575' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 05:54:22 compute-0 ceph-mon[75176]: from='client.14991 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:22 compute-0 ceph-mon[75176]: pgmap v1530: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:22 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1531690575' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 05:54:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 05:54:22 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 05:54:22 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1621989596' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 05:54:23 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.14997 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:23 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 29 05:54:23 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028751212' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 05:54:23 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1621989596' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 05:54:23 compute-0 ceph-mon[75176]: from='client.14997 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:23 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1028751212' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 05:54:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15001 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:24 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15003 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:24 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:24 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 29 05:54:24 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1222303273' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 05:54:24 compute-0 ceph-mon[75176]: from='client.15001 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:24 compute-0 ceph-mon[75176]: from='client.15003 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:24 compute-0 ceph-mon[75176]: pgmap v1531: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:24 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/1222303273' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 05:54:25 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 29 05:54:25 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088766216' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15009 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: log_channel(audit) log [DBG] : from='client.15011 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0005435097797421371 of space, bias 4.0, pg target 0.6522117356905646 quantized to 16 (current 16)
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 29 05:54:25 compute-0 ceph-mgr[75473]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 29 05:54:25 compute-0 ceph-mon[75176]: from='client.? 192.168.122.100:0/3088766216' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 05:54:25 compute-0 ceph-mon[75176]: from='client.15009 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 05:54:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 29 05:54:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3594461797' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 05:54:26 compute-0 ceph-mgr[75473]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 75 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Nov 29 05:54:26 compute-0 ceph-mon[75176]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 29 05:54:26 compute-0 ceph-mon[75176]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712104975' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
